Abstract: The steady improvement of text-to-image (T2I) generative models leads to the gradual deprecation of automatic evaluation benchmarks that rely on static datasets, motivating researchers to seek alternative ways to evaluate T2I progress. In this paper, we explore the potential of multi-modal large language models (MLLMs) as evaluator agents that interact with a T2I model, with the objective of assessing prompt-generation consistency and image aesthetics. We present Multimodal Text-to-Image Eval (MT2IE), an evaluation framework that iteratively generates prompts for evaluation, scores the generated images, and matches the T2I evaluation of existing benchmarks while using only a fraction of their prompts. We show that MT2IE’s prompt-generation consistency scores correlate more strongly with human judgment than prompt consistency metrics previously introduced in the literature. MT2IE generates prompts that efficiently probe T2I model performance, producing the same relative T2I model rankings as existing benchmarks while evaluating on 80× fewer prompts. We hope that these results will unlock the development of dynamic and interactive evaluation frameworks and mitigate the deprecation of automatic evaluation benchmarks.
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=RcdX7U5CGw&nesting=2&sort=date-desc
Changes Since Last Submission: **Changes addressing reviewers’ comments (more details in Overall Reviewer Response):**
* To better align with the paper’s core contributions, we have relocated Section 5.1 on aesthetics to Appendix A and have removed statements from the main text that present MT2IE as a general-purpose evaluation method for tasks beyond prompt consistency.
* We have performed additional experiments using MT2IE with more MLLMs, specifically spanning a wider range of model sizes, with results in Appendix E. Results show that rank correlation scales with MLLM size, with larger MLLMs yielding more aligned T2I model rankings and lower variance. However, MT2IE’s rank correlations are higher than those of existing evaluation methods even with the smallest, 7-billion-parameter MLLM, showing that our results hold across MLLM sizes.
* A discussion of the limitations of MLLMs, their impact on the MT2IE framework, and potential future directions to address these issues has been added to Appendix I.
* We have added text clarifying how seed prompts are generated, specifying that their topics were carefully chosen to cover the full range of COCO’s categories.
* Figures 7 and 9 have been reformatted into subfigures for better readability and clarity.
Assigned Action Editor: ~Chinmay_Hegde1
Submission Number: 5137