Keywords: Vision Language Model, Instruction Tuning, Evaluation
TL;DR: We propose an automatically generated benchmark (CIEM) to evaluate the hallucination problem in VLMs, and contrastive instruction tuning (CIT) to alleviate it.
Abstract: Research on Large Vision-Language Models (LVLMs) has advanced significantly thanks to the success of Large Language Models (LLMs). Nevertheless, these Vision-Language Models (VLMs) suffer from hallucination: due to an insufficient understanding of the vision and language modalities, VLMs may generate incorrect perception information in downstream applications, for example, captioning a non-existent entity. To address the hallucination phenomenon, on the one hand, we introduce a $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{E}$valuation $\textbf{M}$ethod (CIEM), an automatic pipeline that leverages an annotated image-text dataset coupled with an LLM to generate factual/contrastive question-answer pairs for evaluating the hallucination of VLMs. On the other hand, based on CIEM, we further propose a new instruction tuning method called CIT (short for $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{T}$uning) to alleviate the hallucination of VLMs by automatically producing high-quality factual/contrastive question-answer pairs and corresponding justifications for model tuning. Through extensive experiments on CIEM and CIT, we pinpoint the hallucination issues commonly present in existing VLMs, the inability of current instruction-tuning datasets to handle the hallucination phenomenon, and the superiority of CIT-tuned VLMs on both CIEM and public datasets. Please contact the authors for the code and the generated dataset.
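The abstract describes CIEM as an automatic pipeline that pairs an annotated image-text dataset with an LLM to produce factual/contrastive question-answer pairs. The sketch below is a minimal illustration of how such a generation step could look; it is an assumption for exposition, not the authors' released pipeline, and `query_llm` is a hypothetical callable wrapping whatever LLM backend is available.

```python
# Minimal sketch of a CIEM-style QA generation step (illustrative assumption,
# not the authors' code): for each annotated object, ask an LLM to write one
# factual ("yes") and one contrastive ("no") yes/no question about the image.

from typing import Callable, Dict, List


def generate_ciem_pairs(
    caption: str,
    annotated_objects: List[str],
    query_llm: Callable[[str], str],
) -> List[Dict[str, str]]:
    """Produce factual/contrastive question-answer pairs for one image."""
    pairs = []
    for obj in annotated_objects:
        # Factual question: the object is present, so the ground-truth answer is "yes".
        factual_prompt = (
            f"The image is described as: '{caption}'. "
            f"Write a natural yes/no question asking whether a {obj} appears in the image."
        )
        pairs.append({"question": query_llm(factual_prompt), "answer": "yes"})

        # Contrastive question: ask about a plausible but absent object,
        # so a hallucinating VLM is tempted to answer "yes" incorrectly.
        contrastive_prompt = (
            f"The image is described as: '{caption}'. "
            f"Name one object that plausibly co-occurs with a {obj} but is NOT "
            f"mentioned in the description, then write a yes/no question asking "
            f"whether that object appears in the image."
        )
        pairs.append({"question": query_llm(contrastive_prompt), "answer": "no"})
    return pairs


if __name__ == "__main__":
    # Stub LLM for a dry run; a real pipeline would call an actual model.
    demo = generate_ciem_pairs(
        caption="A dog lying on a couch next to a remote control",
        annotated_objects=["dog", "couch"],
        query_llm=lambda prompt: f"[LLM output for: {prompt[:40]}...]",
    )
    for pair in demo:
        print(pair)
```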
Submission Number: 62