Keywords: Text-to-Image Generation, Personalized Image Generation, Evaluation, Benchmark
Abstract: Recent multimodal image generators such as GPT-4o, Gemini 2.0 Flash, and Gemini 2.5 Pro excel at following complex instructions, editing images, and maintaining concept consistency. However, they are still evaluated by disjoint toolkits: text-to-image (T2I) benchmarks that lack multi-modal conditioning, and customized image generation benchmarks that overlook compositional semantics and common knowledge. We propose **MMIG-Bench**, a comprehensive **M**ulti-**M**odal **I**mage **G**eneration **Bench**mark that unifies these tasks by pairing 4,850 richly annotated text prompts with 1,750 multi-view reference images across 380 subjects, spanning humans, animals, objects, and artistic styles. **MMIG-Bench** is equipped with a three-level evaluation framework: (1) low-level metrics for visual artifacts and identity preservation of objects; (2) a novel Aspect Matching Score (AMS), a VQA-based mid-level metric that delivers fine-grained prompt-image alignment and shows strong correlation with human judgments; and (3) high-level metrics for aesthetics and human preference. Using **MMIG-Bench**, we benchmark 17 state-of-the-art models, including Gemini 2.5 Pro, FLUX, DreamBooth, and IP-Adapter, and validate our metrics with 32k human ratings, yielding in-depth insights into architecture and data design.
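To make the mid-level metric concrete, the sketch below shows one generic way a VQA-based prompt-image alignment score can be computed: aspect questions derived from the prompt are posed to a VQA model, and the score is the fraction of aspects the model confirms. The VQA model, the question set, and the scoring rule here are illustrative assumptions, not the paper's actual AMS implementation.

```python
# Illustrative sketch of a VQA-based aspect matching score (not the paper's AMS code).
from transformers import pipeline
from PIL import Image

# Any off-the-shelf VQA model works for this sketch; ViLT is used as an example.
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

def aspect_matching_score(image_path: str, aspect_questions: list[str]) -> float:
    """Fraction of aspect questions the VQA model answers affirmatively."""
    image = Image.open(image_path).convert("RGB")
    hits = 0
    for question in aspect_questions:
        answer = vqa(image=image, question=question)[0]["answer"].lower()
        hits += answer.startswith("yes")
    return hits / len(aspect_questions)

# Hypothetical usage: aspects extracted from the prompt "a red cup on a wooden table".
score = aspect_matching_score(
    "generated.png",
    ["Is there a cup?", "Is the cup red?", "Is the cup on a wooden table?"],
)
print(f"aspect matching score: {score:.2f}")
```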
Croissant File: json
Dataset URL: https://huggingface.co/datasets/zengziyun/MMIG-Bench
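A minimal sketch for loading the benchmark from the Hugging Face Hub, using the repository ID from the Dataset URL above; the default configuration, split names, and column layout are assumptions and should be checked against the dataset card.

```python
# Load MMIG-Bench from the Hugging Face Hub (split/column names are assumptions).
from datasets import load_dataset

ds = load_dataset("zengziyun/MMIG-Bench")  # default configuration
print(ds)  # inspect available splits and features

# Peek at the first few records of whichever split is present.
first_split = next(iter(ds.values()))
for record in first_split.select(range(3)):
    print(record)
```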
Supplementary Material: pdf
Primary Area: Datasets & Benchmarks for applications in computer vision
Submission Number: 717