Mitigating Hallucinations in Large Vision-Language Models via Entity-Centric Multimodal Preference Optimization
Abstract: Large Vision-Language Models (LVLMs) have demonstrated impressive capabilities across a wide range of tasks.
However, their trustworthiness is often challenged by hallucinations, which can be attributed to modality misalignment and to the inherent hallucinations of their underlying Large Language Model (LLM) backbones.
Existing preference alignment methods focus on aligning model responses with human preferences while neglecting image-text modality alignment, resulting in over-reliance on the LLM backbone and, consequently, hallucinations.
In this paper, we propose Entity-centric Multimodal Preference Optimization (EMPO), which achieves stronger modality alignment than existing human preference alignment methods.
In addition, to overcome the scarcity of high-quality multimodal preference data, we leverage open-source instruction datasets to automatically construct high-quality preference data across three aspects: image, instruction, and response.
Experiments on two human preference datasets and five multimodal hallucination benchmarks demonstrate the effectiveness of EMPO, e.g., reducing hallucination rates by 80.4% on Object HalBench and 52.6% on MM HalBench, thereby enhancing the trustworthiness of LVLMs. The code and dataset will be made publicly available.
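For context, preference optimization in this multimodal setting typically builds on a DPO-style objective conditioned on both the image and the instruction. The sketch below shows the standard multimodal DPO loss, not EMPO's exact objective, which the abstract does not specify; the symbols $v$ (image), $q$ (instruction), $y_w$/$y_l$ (preferred/dispreferred responses), $\pi_\theta$, $\pi_{\mathrm{ref}}$, and $\beta$ are notation introduced here for illustration.
% Illustrative sketch: standard DPO objective conditioned on image v and instruction q.
% EMPO is described as additionally constructing preference data over images and
% instructions; its exact objective is not given in this abstract.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(v,\, q,\, y_w,\, y_l) \sim \mathcal{D}}
    \left[ \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid v, q)}{\pi_{\mathrm{ref}}(y_w \mid v, q)}
      \;-\; \beta \log \frac{\pi_\theta(y_l \mid v, q)}{\pi_{\mathrm{ref}}(y_l \mid v, q)}
    \right) \right]
Here $\pi_{\mathrm{ref}}$ is the frozen reference model and $\sigma$ the logistic function; conditioning on preferred versus corrupted images or instructions is one natural way such an objective could be extended toward modality alignment.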
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: vision question answering; multimodal QA; Large Vision Language Models; Direct Preference Optimization; Reinforcement learning
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Keywords: vision question answering; multimodal QA; Large Vision Language Models; Direct Preference Optimization; Reinforcement learning
Submission Number: 7012