Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 · NeurIPS 2023 poster
Keywords: Generative models, Diffusion probabilistic models, Controlled generation, Human Feedback, RLHF
TL;DR: We show that a very small amount of human feedback is sufficient to align a diffusion model's sample generation with specified requirements.
Abstract: Diffusion models have recently shown remarkable success in high-quality image generation. Sometimes, however, a pre-trained diffusion model exhibits partial misalignment in the sense that the model can generate good images, but it occasionally outputs undesirable ones. If so, we simply need to prevent the generation of the bad images, and we call this task censoring. In this work, we present censored generation with a pre-trained diffusion model using a reward model trained on minimal human feedback. We show that censoring can be accomplished with extreme human feedback efficiency and that labels generated with a mere few minutes of human feedback are sufficient.
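One simple way to realize censoring with a learned reward model (a sketch for intuition, not necessarily the paper's exact sampling procedure) is rejection sampling: draw from the pre-trained model and keep only outputs the reward model deems acceptable. Here `sample_diffusion` and `reward` are hypothetical toy stand-ins; in practice the former would be the pre-trained diffusion model and the latter a small classifier trained on a few minutes of human good/bad labels.

```python
import random

def sample_diffusion(rng):
    # Toy stand-in for a pre-trained diffusion model: a single scalar,
    # where negative values play the role of undesirable images.
    return rng.gauss(0.0, 1.0)

def reward(x):
    # Toy stand-in for a reward model trained on human feedback:
    # probability that the sample is acceptable.
    return 1.0 if x >= 0 else 0.0

def censored_sample(rng, threshold=0.5, max_tries=1000):
    """Rejection-sample from the base model, keeping only high-reward outputs."""
    for _ in range(max_tries):
        x = sample_diffusion(rng)
        if reward(x) >= threshold:
            return x
    raise RuntimeError("no acceptable sample found within max_tries")

rng = random.Random(0)
samples = [censored_sample(rng) for _ in range(100)]
# Every retained sample passes the reward check (non-negative in this toy setup).
assert all(s >= 0 for s in samples)
```

The key point the paper's abstract makes is that the reward model needs only minimal supervision; any censoring mechanism built on top of it (rejection, guidance, or fine-tuning) then inherits that label efficiency.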
Supplementary Material: zip
Submission Number: 2541