White Admitted by Stanford, Black Got Rejections: Exploring Racial Stereotypes in Text-to-Image Generation from a College Admissions Lens

Published: 13 Jan 2025, Last Modified: 26 Feb 2025
AAAI 2025 PDLM Poster
License: CC BY 4.0
Keywords: Racial Stereotypes, Text-to-Image Model, Fairness
TL;DR: Exploring Racial Stereotypes in Text-to-Image Generation from a College Admissions Lens
Abstract: In this paper, we investigate racial stereotypes in text-to-image (T2I) models through the lens of U.S. college admissions. Our findings reveal a significant bias in the generated images: T2I models are more likely to produce images of white students when positive prompts such as "admitted" are used, whereas images of Black students are more likely to be generated with negative prompts such as "rejected". We further tested various college admission scenarios, including application outcomes (success/failure), college rankings (top-ranked/non-top-ranked), geographical regions, and the number of students in the generated images. We discovered the following patterns: (1) Overall, white individuals are generated most often in every scene type (success/failure, single-person/group), and white males predominate in successful admission scenes. (2) DALL·E 3 is more likely to revise prompts to be more equitable (by adding descriptions to ensure an equivalent number of individuals from different races) when the original prompts concern top-ranked colleges, but it is less likely to do so for other colleges. (3) Asians are generated more frequently for top-ranked colleges. (4) In Southern college settings, white students form the majority in the generated images, while other races are underrepresented compared with settings in other regions, such as the Midwest or the North. Overall, our study indicates that T2I models encode harmful stereotypes: white males are commonly associated with success, Black individuals are often associated with failure, and Asians are linked to intelligence and top-tier institutions. To address this, we propose a simple, user-friendly mitigation: when prompted to generate images of humans, T2I models should present multiple options featuring different racial compositions and allow users to select their preferred choice.
Submission Number: 38
