Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose a multiagent LLM-MLLM method to correct hallucinated captions and introduce new evaluation metrics that better align with human judgments in detailed image captioning.
Abstract: Multimodal large language models (MLLMs) excel at generating highly detailed captions but often produce hallucinations. Our analysis reveals that existing hallucination detection methods struggle with detailed captions. We attribute this to MLLMs increasingly relying on their generated text, rather than the input image, as the sequence length grows. To address this issue, we propose a multiagent approach that leverages LLM-MLLM collaboration to correct a given caption. Additionally, we introduce an evaluation framework and a benchmark dataset to facilitate the systematic analysis of detailed captions. Our experiments demonstrate that the proposed evaluation method aligns better with human judgments of factuality than existing metrics. Moreover, we show that current approaches for enhancing MLLM factuality often fail in hyper-detailed image captioning tasks. In contrast, our approach significantly enhances the factual accuracy of captions, even improving those generated by GPT-4V. Finally, we highlight a limitation of VQA-centric benchmarking by demonstrating that an MLLM's performance on VQA benchmarks may not correlate with its ability to generate detailed image captions.
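The sketch below illustrates one way such an LLM-MLLM correction loop could be wired up. It is a minimal, hypothetical example, not the authors' CapMAS implementation: the function `correct_caption` and the `llm`/`mllm` callables are assumptions introduced here for illustration, with the LLM decomposing the caption into atomic claims, the MLLM checking each claim against the image, and the LLM rewriting the caption using only supported claims.

```python
from typing import Callable, List

def correct_caption(
    caption: str,
    image_path: str,
    llm: Callable[[str], str],        # text-only LLM: prompt -> completion (assumed interface)
    mllm: Callable[[str, str], str],  # multimodal model: (prompt, image_path) -> answer (assumed interface)
    max_rounds: int = 3,
) -> str:
    """Illustrative LLM-MLLM collaboration loop; not the authors' exact pipeline."""
    for _ in range(max_rounds):
        # 1) The LLM splits the caption into independently checkable statements.
        claims: List[str] = llm(
            "Split this image caption into atomic factual claims, one per line:\n" + caption
        ).splitlines()
        claims = [c.strip() for c in claims if c.strip()]

        # 2) The MLLM verifies each claim against the actual image.
        verdicts = [
            mllm(f"Does the image support this statement? Answer yes or no: {c}", image_path)
            for c in claims
        ]
        unsupported = [c for c, v in zip(claims, verdicts) if not v.lower().startswith("yes")]

        # 3) If every claim is supported, stop; otherwise the LLM rewrites the caption
        #    to drop or fix the unsupported claims, and the loop re-checks the result.
        if not unsupported:
            return caption
        caption = llm(
            "Rewrite the caption so it no longer asserts the following unsupported claims, "
            "keeping all other details:\n"
            + "\n".join(unsupported)
            + "\n\nCaption:\n" + caption
        )
    return caption
```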
Lay Summary: Multimodal AI systems, like GPT-4V, can describe images in rich detail, but they often make things up. This is especially true when the captions get long and specific, as the models tend to rely more on their own generated words than on the actual image. Current tools for detecting these hallucinations don't work well on such detailed captions. To tackle this, we developed a new method in which two AI agents, a language model and a vision-language model, work together to correct inaccurate image descriptions. We also created a benchmark and evaluation tool that judges how truthful these captions are in a way that better matches human assessments. Our experiments show that this approach significantly improves the accuracy of even the strongest models and exposes weaknesses in common evaluation tests like VQA. This work helps move us closer to AI that we can trust to describe images accurately, especially in high-stakes or detail-critical applications.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/adobe-research/CapMAS
Primary Area: Applications->Computer Vision
Keywords: Image captioning, Caption evaluation, Multimodal large language models, Large language models
Submission Number: 1286