FREAK: A Fine-grained Hallucination Evaluation Benchmark for Advanced MLLMs

ICLR 2026 Conference Submission17171 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: MLLM, VLM, Hallucination, Benchmark, Chain-of-Thought
TL;DR: This paper proposes a novel benchmark designed for fine-grained hallucination evaluation of MLLMs, revealing the severity of fine-grained hallucinations in advanced MLLMs and experimentally analyzing the limitations of CoT on hallucination tasks.
Abstract: Multimodal Large Language Models (MLLMs) suffer from hallucinations. Existing hallucination evaluation benchmarks are often limited by over-simplified tasks that lead to saturated metrics, or by insufficient diversity that fails to adequately assess the extent of hallucination in state-of-the-art multimodal models. To address this gap, we propose FREAK, a comprehensive multimodal benchmark designed for fine-grained hallucination assessment in MLLMs. Using high-quality photorealistic images featuring fine-grained counter-commonsense edits, FREAK evaluates hallucination phenomena in the detailed visual perception of MLLMs. Extensive experiments on FREAK reveal severe hallucination issues in SOTA models with respect to detailed visual perception. To enable deeper investigation, we curate a controlled subset to indirectly evaluate a model's ability to perceive targeted fine-grained information. Through systematic evaluation of prevailing Chain-of-Thought (CoT) prompting techniques on this task, we reveal critical insights into hallucination patterns and model reasoning processes.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 17171