Diffusion models have recently emerged as the de facto standard for generating complex, high-dimensional outputs. You may know them for their ability to produce stunning AI art and hyper-realistic synthetic images, but they have also found success in other applications such as drug design and continuous control. The key idea behind diffusion models is to iteratively transform random noise into a sample, such as an image or protein structure. This is typically motivated as a maximum likelihood estimation problem, where the model is trained to generate samples that match the training data as closely as possible.
However, most use cases of diffusion models are not directly concerned with matching the training data, but instead with a downstream objective. We don’t just want an image that looks like existing images, but one that has a specific type of appearance; we don’t just want a drug molecule that is physically plausible, but one that is as effective as possible. In this post, we show how diffusion models can be trained on these downstream objectives directly using reinforcement learning (RL). To do this, we finetune Stable Diffusion on a variety of objectives, including image compressibility, human-perceived aesthetic quality, and prompt-image alignment. The last of these objectives uses feedback from a large vision-language model to improve the model’s performance on unusual prompts, demonstrating how powerful AI models can be used to improve each other without any humans in the loop.
A diagram illustrating the prompt-image alignment objective. It uses LLaVA, a large vision-language model, to evaluate generated images.
Denoising Diffusion Policy Optimization
When turning diffusion into an RL problem, we make only the most basic assumption: given a sample (e.g. an image), we have access to a reward function that we can evaluate to tell us how “good” that sample is. Our goal is for the diffusion model to generate samples that maximize this reward function.
Diffusion models are typically trained using a loss function derived from maximum likelihood estimation (MLE), meaning they are encouraged to generate samples that make the training data look more likely. In the RL setting, we no longer have training data, only samples from the diffusion model and their associated rewards. One way we can still use the same MLE-motivated loss function is to treat the samples as training data and incorporate the rewards by weighting the loss for each sample by its reward. This gives us an algorithm that we call reward-weighted regression (RWR), after existing algorithms from the RL literature.
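To make this concrete, here is a minimal sketch of what a reward-weighted diffusion loss could look like, written in diffusers-style PyTorch. The names (`unet`, `scheduler`, `prompt_emb`, `rwr_loss`) and the softmax weighting are illustrative assumptions, not the exact recipe used in the paper.

```python
import torch
import torch.nn.functional as F

def rwr_loss(unet, scheduler, samples, prompt_emb, rewards):
    """Sketch of reward-weighted regression: the standard epsilon-prediction
    diffusion loss, with each sample's loss weighted by (a softmax over) its reward."""
    weights = torch.softmax(rewards, dim=0)  # turn scalar rewards into per-sample weights
    noise = torch.randn_like(samples)
    t = torch.randint(
        0, scheduler.config.num_train_timesteps, (samples.shape[0],), device=samples.device
    )
    noisy = scheduler.add_noise(samples, noise, t)  # forward-diffuse the model's own samples
    pred = unet(noisy, t, encoder_hidden_states=prompt_emb).sample
    per_sample = F.mse_loss(pred, noise, reduction="none").mean(dim=(1, 2, 3))
    return (weights * per_sample).sum()  # reward-weighted MLE-style objective
```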
However, there are a few problems with this approach. One is that RWR is not a particularly exact algorithm: it maximizes the reward only approximately (see Nair et al., Appendix A). The MLE-inspired loss for diffusion is also not exact and is instead derived from a variational bound on the true likelihood of each sample. This means that RWR maximizes the reward through two levels of approximation, which we find significantly hurts its performance.
We evaluate two variants of DDPO and two variants of RWR on three reward functions and find that DDPO consistently achieves the best performance.
The key insight of our algorithm, which we call denoising diffusion policy optimization (DDPO), is that we can better maximize the reward of the final sample if we pay attention to the entire sequence of denoising steps that got us there. To do this, we reframe the diffusion process as a multi-step Markov decision process (MDP). In MDP terminology: each denoising step is an action, and the agent only receives a reward on the final step of each denoising trajectory, when the final sample is produced. This framework allows us to apply many powerful algorithms from the RL literature that are designed specifically for multi-step MDPs. Instead of using the approximate likelihood of the final sample, these algorithms use the exact likelihood of each denoising step, which is extremely easy to compute.
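Concretely, each reverse-diffusion step samples the next latent from a Gaussian whose mean is predicted by the model, so the log-likelihood of that “action” is just a diagonal Gaussian log-density. A minimal sketch (variable names are illustrative):

```python
import torch

def denoising_step_logprob(pred_mean, std, x_prev):
    """Exact log-probability of one denoising step, viewed as an MDP action.

    The reverse process samples x_{t-1} ~ N(pred_mean(x_t, t), std_t^2 I), so its
    log-likelihood is a cheap diagonal Gaussian density -- unlike the likelihood
    of the final image, which is only available through a variational bound.
    """
    dist = torch.distributions.Normal(pred_mean, std)
    return dist.log_prob(x_prev).sum(dim=(1, 2, 3))  # sum over channel and spatial dims
```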
We chose to apply policy gradient algorithms due to their ease of implementation and past success in language model finetuning. This led to two variants of DDPO: DDPO_SF, which uses the simple score function estimator of the policy gradient, also known as REINFORCE; and DDPO_IS, which uses a more powerful importance-sampled estimator. DDPO_IS is our best-performing algorithm, and its implementation closely follows that of proximal policy optimization (PPO).
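As a rough illustration of the importance-sampled variant, here is a PPO-style clipped surrogate loss applied over denoising steps. This is a sketch under stated assumptions (per-step log-probabilities computed as above, final rewards normalized into advantages and broadcast to every step, and an illustrative clip range), not the reference implementation.

```python
import torch

def ddpo_is_loss(new_logprobs, old_logprobs, advantages, clip_range=0.2):
    """PPO-style clipped importance-sampling objective over denoising steps (sketch).

    new_logprobs / old_logprobs: per-step log-probs of the same denoising actions
    under the current model and the model that collected the samples.
    advantages: normalized final-sample rewards, broadcast to every step.
    """
    ratio = torch.exp(new_logprobs - old_logprobs)  # importance weights
    unclipped = -advantages * ratio
    clipped = -advantages * torch.clamp(ratio, 1.0 - clip_range, 1.0 + clip_range)
    return torch.maximum(unclipped, clipped).mean()  # pessimistic clipped surrogate
```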
Finetuning Stable Diffusion Using DDPO
For our main results, we finetune Stable Diffusion v1-4 using DDPO_IS. We have four tasks, each defined by a different reward function:
- Compressibility: How easy is the image to compress using the JPEG algorithm? The reward is the negative file size of the image (in kB) when saved as a JPEG. (A minimal sketch of this reward appears after this list.)
- Incompressibility: How hard is the image to compress using the JPEG algorithm? The reward is the positive file size of the image (in kB) when saved as a JPEG.
- Aesthetic Quality: How aesthetically pleasing is the image to the human eye? The reward is the output of the LAION aesthetic predictor, which is a neural network trained on human preferences.
- Prompt-Image Alignment: How well does the image represent what was asked for in the prompt? This one is a bit more complicated: we feed the image into LLaVA, ask it to describe the image, and then compute the similarity between that description and the original prompt using BERTScore.
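For reference, the compressibility reward described above fits in a few lines. The sketch below uses PIL; the function name and the JPEG quality setting are assumptions on our part.

```python
import io

from PIL import Image

def jpeg_compressibility_reward(image: Image.Image, quality: int = 95) -> float:
    """Compressibility reward: negative JPEG file size in kB.
    Flip the sign to get the incompressibility reward."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    return -buffer.tell() / 1000.0  # bytes written -> kB, negated
```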
Since Stable Diffusion is a text-to-image model, we also need to pick a set of prompts to give it during finetuning. For the first three tasks, we use simple prompts of the form “a(n) [animal]”. For prompt-image alignment, we use prompts of the form “a(n) [animal] [activity]”, where the activities are “washing dishes”, “playing chess”, and “riding a bike”. We found that Stable Diffusion often struggled to produce images that matched the prompt for these unusual scenarios, leaving plenty of room for improvement with RL finetuning.
First, we illustrate the performance of DDPO on the simple rewards (compressibility, incompressibility, and aesthetic quality). All of the images are generated with the same random seed. In the top left quadrant, we illustrate what “vanilla” Stable Diffusion generates for nine different animals; all of the RL-finetuned models show a clear qualitative difference. Interestingly, the aesthetic quality model (top right) tends towards minimalist black-and-white line drawings, revealing the kinds of images that the LAION aesthetic predictor considers “more aesthetic”.
Next, we demonstrate DDPO on the more complex prompt-image alignment task. Here, we show several snapshots from the training process: each sequence of three images shows samples for the same prompt and random seed over time, with the first sample coming from vanilla Stable Diffusion. Interestingly, the model shifts towards a more cartoon-like style, which was not intentional. We hypothesize that this is because animals doing human-like activities are more likely to appear in a cartoon-like style in the pretraining data, so the model shifts towards this style to more easily align with the prompt by leveraging what it already knows.
Unexpected Generalization
Surprising generalization has been found to arise when finetuning large language models with RL: for example, models finetuned on instruction-following only in English often improve in other languages. We find that the same phenomenon occurs with text-to-image diffusion models. For example, our aesthetic quality model was finetuned using prompts that were selected from a list of 45 common animals. We find that it generalizes not only to unseen animals but also to everyday objects.
Our prompt-image alignment model used the same list of 45 common animals during training, and only three activities. We find that it generalizes not only to unseen animals but also to unseen activities, and even novel combinations of the two.
Overoptimization
It is well known that finetuning on a reward function, especially a learned one, can lead to reward overoptimization, where the model exploits the reward function to achieve a high reward in a way that is not useful. Our setting is no exception: in all of the tasks, the model eventually destroys any meaningful image content to maximize reward.
We also discovered that LLaVA is susceptible to typographic attacks: when optimizing for alignment with respect to prompts of the form “[n] animals”, DDPO was able to successfully fool LLaVA by instead generating text loosely resembling the correct number.
There is currently no general-purpose method for preventing overoptimization, and we highlight this problem as an important area for future work.
Conclusion
Diffusion models are hard to beat when it comes to generating complex, high-dimensional outputs. However, so far they have mostly been successful in applications where the goal is to learn patterns from lots and lots of data (for example, image-caption pairs). What we have found is a way to effectively train diffusion models in a way that goes beyond pattern-matching, and without necessarily requiring any training data. The possibilities are limited only by the quality and creativity of your reward function.
The way we used DDPO in this work is inspired by the recent successes of language model finetuning. OpenAI’s GPT models, like Stable Diffusion, are first trained on huge amounts of Internet data; they are then finetuned with RL to produce useful tools like ChatGPT. Typically, their reward function is learned from human preferences, but others have more recently figured out how to produce powerful chatbots using reward functions based on AI feedback instead. Compared to the chatbot regime, our experiments are small-scale and limited in scope. But considering the massive success of this “pretrain + finetune” paradigm in language modeling, it certainly seems worth pursuing further in the world of diffusion models. We hope that others can build on our work to improve large diffusion models, not just for text-to-image generation, but for many exciting applications such as video generation, music generation, image editing, protein synthesis, robotics, and more.
Furthermore, the “pretrain + finetune” paradigm is not the only way to use DDPO. As long as you have a good reward function, there is nothing stopping you from training with RL from the start. While this setting is as yet unexplored, it is a place where the strengths of DDPO could really shine. Pure RL has long been applied to a wide variety of domains ranging from playing games to robotic manipulation to nuclear fusion to chip design. Adding the powerful expressivity of diffusion models to the mix has the potential to take existing applications of RL to the next level, or even to discover new ones.
This post is based on the following paper:
If you want to learn more about DDPO, you can check out the paper, website, original code, or get the model weights on Hugging Face. If you want to use DDPO in your own project, check out my PyTorch + LoRA implementation, where you can finetune Stable Diffusion with less than 10GB of GPU memory!
If DDPO inspires your work, please cite it with:
@misc{black2023ddpo,
title={Training Diffusion Models with Reinforcement Learning},
author={Kevin Black and Michael Janner and Yilun Du and Ilya Kostrikov and Sergey Levine},
year={2023},
eprint={2305.13301},
archivePrefix={arXiv},
primaryClass={cs.LG}
}