Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models

NeurIPS 2023 Workshop ICBINB, Submission 37

Published: 27 Oct 2023, Last Modified: 01 Dec 2023
Keywords: diffusion models, social bias, fairness, generative AI, text-to-image
TL;DR: We uncover social biases in text-to-image diffusion models, particularly in under-explored downstream tasks like image editing and classification
Abstract: Text-to-image diffusion models have been adopted into key commercial workflows, such as art generation and image editing. Characterizing the implicit social biases they exhibit, such as gender and racial stereotypes, is a necessary first step in avoiding discriminatory outcomes. While existing studies of social bias focus on image generation, the biases exhibited in other applications of diffusion-based foundation models remain under-explored. We propose a framework that uses synthetic images to probe two applications of diffusion models, image editing and classification, for social bias. Using our framework, we uncover meaningful and significant intersectional social biases in Stable Diffusion, a state-of-the-art open-source text-to-image model. Our findings caution against the uninformed adoption of text-to-image foundation models for downstream tasks and services.
Submission Number: 37