Do Vision Encoders Truly Explain Object Hallucination?: Mitigating Object Hallucination via Simple Fine-Grained CLIPScore
Abstract: Large Vision-Language Models (LVLMs) have recently shown remarkable performance across various domains.
However, these models suffer from object hallucination.
In this work, we study object hallucination primarily in a discriminative, retrieval-style evaluation setting (OHD-Caps), rather than in free-form caption generation.
This study revisits the previous claim that the cause of such hallucinations lies in the limited representational capacity of the vision encoder.
Our analysis indicates that the representational capacity of the vision encoder is not necessarily a major limiting factor in detecting object hallucination.
Based on this insight, we propose Fine-grained CLIPScore (F-CLIPScore), a simple yet effective evaluation metric that enhances object-level granularity by incorporating text embeddings at the noun level.
Evaluations on the OHD-Caps benchmark show that F-CLIPScore outperforms conventional CLIPScore by a large margin of 39.6% in accuracy, without any additional training.
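As a rough illustration of the idea, the sketch below computes a noun-level variant of CLIPScore with Hugging Face transformers and spaCy. This is a minimal sketch under assumptions made here, not the paper's implementation: the spaCy-based noun extraction, the averaging of the full-caption similarity with per-noun similarities, and the CLIPScore-style rescaling weight w = 2.5 are all illustrative choices.

```python
import torch
import spacy
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical sketch of a noun-level "fine-grained" CLIPScore; not the authors' exact formulation.
nlp = spacy.load("en_core_web_sm")                      # POS tagger used to extract nouns
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def f_clipscore(image: Image.Image, caption: str, w: float = 2.5) -> float:
    """Average the caption-level CLIP similarity with noun-level similarities."""
    nouns = [tok.text for tok in nlp(caption) if tok.pos_ in ("NOUN", "PROPN")]
    texts = [caption] + nouns                           # full caption plus each of its nouns
    inputs = processor(text=texts, images=image,
                       return_tensors="pt", padding=True, truncation=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sims = (txt @ img.T).squeeze(-1)                    # cosine similarity per text entry
    # CLIPScore-style rescaling and clipping at zero (Hessel et al., 2021)
    return float(w * torch.clamp(sims.mean(), min=0.0))
```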
We further demonstrate that F-CLIPScore-based data filtering reduces object hallucination in LVLMs, yielding a 4.9% gain in POPE accuracy after alignment pretraining.
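For the filtering step, a minimal sketch (assuming image-caption pairs below a hand-picked score threshold are simply dropped from the alignment-pretraining data; the threshold value is illustrative, not taken from the paper) could look like:

```python
# Hypothetical use of the f_clipscore sketch above to filter alignment-pretraining data.
# The threshold is illustrative; in practice it would be tuned on a held-out set.
def filter_pairs(pairs, threshold=0.6):
    """Keep only (PIL image, caption) pairs scoring at or above the threshold."""
    return [(img, cap) for img, cap in pairs if f_clipscore(img, cap) >= threshold]
```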
Our code is publicly available at https://github.com/abzb1/f-clip
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/abzb1/f-clip
Supplementary Material: zip
Assigned Action Editor: ~Chunyuan_Li1
Submission Number: 5742