Benchmarking Diversity in Image Generation via Attribute-Conditional Human Evaluation

ICLR 2026 Conference Submission 17320 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: diversity evaluation, text-to-image model evaluation, human evaluation of diversity
TL;DR: We introduce a framework using human evaluation, curated prompts, and statistical analysis to systematically evaluate the diversity of text-to-image models.
Abstract: Despite advances in photorealistic image generation, current text-to-image (T2I) models often lack diversity, generating homogeneous outputs. This work introduces a framework that addresses the need for robust diversity evaluation in T2I models. Our framework systematically assesses diversity by evaluating individual concepts and their relevant factors of variation. Key contributions include: (1) a novel human evaluation template for nuanced diversity assessment; (2) a curated prompt set covering diverse concepts with their identified factors of variation (e.g., prompt: An image of an apple, factor of variation: color); and (3) a methodology for comparing models on the basis of human annotations using binomial tests. Furthermore, we rigorously compare various image embeddings for diversity measurement. Notably, our principled approach enables ranking of T2I models by diversity and identifies categories where they particularly struggle. This research offers a robust methodology and insights, paving the way for improvements in T2I model diversity and metric development.
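To illustrate the kind of binomial-test comparison described in contribution (3), here is a minimal sketch (not the authors' implementation) assuming pairwise human judgments: for each prompt, annotators pick which of two models produced the more diverse image set, and the per-model win counts are tested against a null of no preference.

from scipy.stats import binomtest

# Hypothetical annotation counts for one concept/factor of variation:
# out of n prompts, wins_a is how often annotators judged model A's
# image set as more diverse than model B's (ties excluded).
wins_a = 37
n = 50

# Two-sided binomial test against the null that neither model is
# preferred (p = 0.5); a small p-value suggests a reliable diversity
# difference between the two models on this category.
result = binomtest(wins_a, n, p=0.5, alternative="two-sided")
print(f"win rate = {wins_a / n:.2f}, p-value = {result.pvalue:.4f}")

Repeating such a test per concept category would yield the kind of per-category ranking and struggle analysis the abstract describes, though the exact aggregation used in the paper may differ.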
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 17320