Benchmarking Diversity in Text-to-Image Models via Attribute-Conditional Human Evaluation

09 May 2025 (modified: 30 Oct 2025) · Submitted to NeurIPS 2025 Datasets and Benchmarks Track · CC BY 4.0
Keywords: diversity evaluation, text-to-image model evaluation, human evaluation of diversity
TL;DR: We introduce a framework using human evaluation, curated prompts, and statistical analysis to systematically evaluate diversity of text-to-image models
Abstract: Despite advancements in photorealistic image generation, current text-to-image (T2I) models often lack diversity, generating homogeneous outputs. This work introduces a framework to address the need for robust diversity evaluation in T2I models. Our framework systematically assesses diversity by evaluating individual concepts and their relevant factors of variation. Key contributions include: (1) a novel human evaluation template for nuanced diversity assessment; (2) a curated prompt set covering diverse concepts with their identified factors of variation (e.g., prompt: $\textit{An image of an apple}$; factor of variation: color); and (3) a methodology for comparing models in terms of human annotations via binomial tests. Furthermore, we rigorously compare various image embeddings for diversity measurement. Our principled approach enables ranking of T2I models by diversity and identifies categories where they particularly struggle. This research offers a robust methodology and insights, paving the way for improvements in T2I model diversity and metric development.
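The abstract's third contribution, comparing models via binomial tests on human annotations, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the annotation outcomes below are made up, and the paper's exact test formulation (one- vs. two-sided, handling of ties) is not specified here.

```python
from scipy.stats import binomtest

# Hypothetical per-prompt annotation outcomes: 1 if raters judged model A's
# image set more diverse than model B's for that prompt, 0 otherwise.
# (Illustrative data only; not from the paper's dataset.)
a_wins = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]

# Two-sided binomial test against the null hypothesis that the two models
# are equally likely to be judged more diverse (success probability 0.5).
result = binomtest(sum(a_wins), n=len(a_wins), p=0.5, alternative="two-sided")
print(f"A preferred on {sum(a_wins)}/{len(a_wins)} prompts, p = {result.pvalue:.4f}")
```

A small p-value here would indicate that one model is judged more diverse significantly more often than chance; with many pairwise model comparisons, a multiple-testing correction would typically also be applied.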
Croissant File: json
Dataset URL: https://storage.googleapis.com/t2i-diversity/t2i_diversity/t2i_diversity.csv
Supplementary Material: pdf
Primary Area: Evaluation (e.g., data collection methodology, data processing methodology, data analysis methodology, meta studies on data sources, extracting signals from data, replicability of data collection and data analysis and validity of metrics, validity of data collection experiments, human-in-the-loop for data collection, human-in-the-loop for data evaluation)
Flagged For Ethics Review: true
Submission Number: 989