Embedding an Ethical Mind: Aligning Text-to-Image Synthesis via Lightweight Value Optimization

Published: 20 Jul 2024, Last Modified: 06 Aug 2024 · MM 2024 Poster · License: CC BY 4.0
Abstract: Recent advances in diffusion models trained on large-scale data have enabled the generation of images indistinguishable from human-created ones, yet these models often produce harmful content misaligned with human values, e.g., social bias and offensive imagery. Despite extensive research on alignment for Large Language Models (LLMs), the alignment of Text-to-Image (T2I) models remains largely unexplored. To address this problem, we propose LiVO (Lightweight Value Optimization), a novel lightweight method for aligning T2I models with human values. LiVO optimizes only a plug-and-play value encoder that integrates a specified value principle with the input prompt, allowing control of generated images over both semantics and values. Specifically, we design a preference optimization loss tailored to diffusion models, which theoretically approximates the Bradley-Terry model used in LLM alignment while providing a more flexible trade-off between image quality and value conformity. To optimize the value encoder, we also develop a framework that automatically constructs a text-image preference dataset of 86k (prompt, aligned image, violating image, value principle) samples. Without updating most model parameters, and through adaptive value selection from the input prompt, LiVO significantly reduces harmful outputs and converges faster, surpassing several strong baselines and taking an initial step towards ethically aligned T2I models. Warning: This paper contains descriptions and images depicting discriminatory, pornographic, bloody, and horrific scenes, which some readers may find offensive or disturbing.
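For intuition, here is a minimal sketch of the Bradley-Terry preference model and a DPO-style pairwise loss of the kind the abstract refers to. This is illustrative only: the function names, the scalar log-probability inputs, and the `beta` temperature are assumptions for the sketch, and LiVO's diffusion-tailored loss (operating on aligned vs. value-violating images) differs in its exact form.

```python
import math

def bradley_terry_prob(reward_preferred, reward_rejected):
    """Bradley-Terry probability that the preferred sample beats the
    rejected one, i.e. sigmoid of the reward difference."""
    return 1.0 / (1.0 + math.exp(-(reward_preferred - reward_rejected)))

def dpo_style_loss(logp_w, logp_l, logp_w_ref, logp_l_ref, beta=0.1):
    """DPO-style preference loss: -log sigmoid(beta * margin), where the
    margin is the policy's log-prob advantage on the preferred sample (w)
    over the rejected sample (l), relative to a frozen reference model.
    In a T2I setting, w/l would correspond to the aligned and
    value-violating images of a preference pair (hypothetical mapping)."""
    margin = (logp_w - logp_w_ref) - (logp_l - logp_l_ref)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With zero margin the loss is log 2; as the policy increasingly prefers the aligned sample over the reference model's preference, the margin grows and the loss shrinks, which is the mechanism a diffusion-tailored variant can reweight to trade off image quality against value conformity.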
Primary Subject Area: [Generation] Social Aspects of Generative AI
Secondary Subject Area: [Generation] Generative Multimedia
Relevance To Conference: Our work addresses the ethical issues of text-to-image models, typified by Stable Diffusion, which involve two modalities: text and image. We develop a preference optimization loss tailored to diffusion models, which theoretically approximates the Bradley-Terry model used in LLM alignment but offers greater flexibility in balancing image quality and value conformity. We believe our work can contribute to better aligning text-to-image models with human values.
Supplementary Material: zip
Submission Number: 5339