Keywords: large vision-language models, multi-modal large language models, object hallucination detection
TL;DR: We propose GLSim, a training-free framework that combines global and local embedding similarity signals for accurate object hallucination detection in LVLMs, outperforming prior methods.
Abstract: Object hallucination in large vision-language models presents a significant challenge to their safe deployment in real-world applications.
Recent works have proposed object-level hallucination scores to estimate the likelihood of object hallucination; however, these methods typically adopt either a global or local perspective in isolation, which may limit detection reliability.
In this paper, we introduce GLSim, a novel training-free object hallucination detection framework that leverages complementary global and local embedding similarity signals between image and text modalities, enabling more accurate and reliable hallucination detection in diverse scenarios.
We comprehensively benchmark existing object hallucination detection methods and demonstrate that GLSim achieves superior detection performance, outperforming competitive baselines by a significant margin.
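The abstract describes combining a global image-text similarity signal with a local (patch-level) one. As a minimal illustrative sketch only — the function name, the cosine-based scoring, and the weighted-sum combination are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def global_local_score(global_img_emb, patch_embs, obj_text_emb, alpha=0.5):
    """Hypothetical global+local hallucination score for one candidate object.

    A higher score suggests the object is grounded in the image; a low score
    flags a likely hallucination. The weighting `alpha` and the max-over-patches
    local signal are illustrative assumptions.
    """
    # Global signal: similarity of the whole-image embedding to the object text.
    g = cosine(global_img_emb, obj_text_emb)
    # Local signal: similarity of the best-matching image patch to the object text.
    l = max(cosine(p, obj_text_emb) for p in patch_embs)
    # Combine the two complementary signals.
    return alpha * g + (1 - alpha) * l
```

In practice the embeddings would come from the LVLM's vision encoder and text representations; thresholding the score would then yield a binary hallucination decision.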
Submission Number: 39