Prescribing the right remedy: Mitigating hallucinations in large vision-language models via targeted instruction tuning
Abstract: Despite achieving outstanding performance on various cross-modal tasks, current large vision-language models (LVLMs) still suffer from hallucination issues, which manifest as inconsistencies between their generated responses and the corresponding images. Prior research has indicated that the low quality of instruction data, especially the skewed balance between positive and negative samples, is a significant contributor to model hallucinations. Recently, researchers have developed high-quality instruction datasets, such as LRV-Instruction, to mitigate model hallucinations. Nonetheless, our investigation reveals that hallucinatory concepts are model-specific, i.e., the distribution of hallucinatory concepts varies significantly across models. Existing datasets do not account for this hallucination specificity in their design, which limits their efficacy in mitigating model hallucination. In this paper, we propose a targeted instruction data generation framework named DFTG, tailored to the hallucination specificity of different models. Concretely, DFTG consists of two stages: hallucination diagnosis, which extracts from the model's responses and the images the information needed to identify model-specific hallucinations; and targeted data generation, which generates targeted instruction data based on the diagnostic results. Experimental results on hallucination benchmarks demonstrate that the targeted instruction data generated by our method are more effective in mitigating hallucinations than previous datasets.
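The abstract describes DFTG only at a high level. As a purely illustrative aid, the sketch below shows what a two-stage pipeline of this kind (diagnose model-specific hallucinations, then generate targeted positive/negative instruction pairs) might look like; all names, data structures, and the naive concept extractor are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a two-stage targeted instruction-data pipeline.
# Not the authors' code: all names and structures are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Image:
    id: str
    annotated_objects: List[str]   # ground-truth objects present in the image


@dataclass
class Diagnosis:
    image_id: str
    hallucinated_concepts: List[str]   # mentioned by the LVLM but absent from the image
    grounded_concepts: List[str]       # mentioned by the LVLM and actually present


def extract_concepts(response: str, vocabulary: List[str]) -> List[str]:
    """Naive concept extractor: keep vocabulary terms that appear in the response."""
    text = response.lower()
    return [c for c in vocabulary if c.lower() in text]


def diagnose_hallucinations(describe: Callable[[Image], str],
                            images: List[Image],
                            vocabulary: List[str]) -> List[Diagnosis]:
    """Stage 1 (hallucination diagnosis): query the target LVLM on each image and
    compare its response against the image annotations to find which concepts
    this particular model tends to hallucinate."""
    diagnoses = []
    for img in images:
        mentioned = extract_concepts(describe(img), vocabulary)
        present = set(img.annotated_objects)
        diagnoses.append(Diagnosis(
            image_id=img.id,
            hallucinated_concepts=[c for c in mentioned if c not in present],
            grounded_concepts=[c for c in mentioned if c in present],
        ))
    return diagnoses


def generate_targeted_instructions(diagnoses: List[Diagnosis]) -> List[dict]:
    """Stage 2 (targeted data generation): turn each diagnosis into instruction
    pairs focused on the model's own hallucinated concepts (negative samples)
    balanced with concepts that are truly present (positive samples)."""
    data = []
    for d in diagnoses:
        for concept in d.hallucinated_concepts:
            data.append({"image": d.image_id,
                         "instruction": f"Is there a {concept} in the image?",
                         "answer": "No."})
        for concept in d.grounded_concepts:
            data.append({"image": d.image_id,
                         "instruction": f"Is there a {concept} in the image?",
                         "answer": "Yes."})
    return data
```

The resulting instruction pairs would then be used to fine-tune the same LVLM that was diagnosed, so the negative samples directly target that model's own hallucination distribution.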
External IDs: dblp:journals/isci/HuTWLS25