Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data

Published: 12 Oct 2024 · Last Modified: 14 Nov 2024 · SafeGenAi Poster · CC BY 4.0
Keywords: hallucination detection, hallucinations, reliability, multimodal language models
TL;DR: We pose multimodal hallucination detection as a sequence labeling task in which models localize hallucinated spans, and we present a strong baseline. Given the high cost of human annotation for this task, we propose a method to improve model sample efficiency.
Abstract: Multimodal language models can exhibit hallucinations in their outputs, which limits their reliability. The ability to automatically detect these errors is important for mitigating them, but it has been less explored, and existing efforts frame detection as a classification task rather than localizing hallucinations. In this work, we first pose multimodal hallucination detection as a sequence labeling task in which models must localize hallucinated text spans, and we present a strong baseline model. Given the high cost of human annotation for this task, we propose an approach to improve the sample efficiency of these models by creating corrupted grounding data, which we use for pre-training. Leveraging phrase grounding data, we generate hallucinations that replace grounded spans, producing hallucinated text. Experiments show that pre-training on this data improves sample efficiency during fine-tuning, and that the learning signal from the grounding data plays an important role in these improvements.
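As a rough illustration of the corrupted grounding data described in the abstract, the sketch below swaps grounded phrase spans in a caption for hallucinated phrases and records span-level hallucination labels suitable for sequence labeling. The names (`GroundedCaption`, `corrupt_caption`) and the replacement-phrase list are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple
import random


@dataclass
class GroundedCaption:
    text: str
    # (start, end) character offsets of phrases grounded to image regions
    grounded_spans: List[Tuple[int, int]]


def corrupt_caption(example: GroundedCaption,
                    replacement_phrases: List[str],
                    corruption_prob: float = 0.5,
                    rng: random.Random = random.Random(0)):
    """Replace some grounded spans with hallucinated phrases.

    Returns the corrupted caption and a list of (segment, is_hallucinated)
    pairs, where is_hallucinated is 1 for inserted hallucinations.
    """
    pieces, labels = [], []
    cursor = 0
    for start, end in sorted(example.grounded_spans):
        # Copy the text before this grounded span, labeled as faithful (0).
        prefix = example.text[cursor:start]
        pieces.append(prefix)
        labels.append((prefix, 0))
        if rng.random() < corruption_prob:
            # Swap the grounded phrase for a hallucinated one (label 1).
            fake = rng.choice(replacement_phrases)
            pieces.append(fake)
            labels.append((fake, 1))
        else:
            span = example.text[start:end]
            pieces.append(span)
            labels.append((span, 0))
        cursor = end
    suffix = example.text[cursor:]
    pieces.append(suffix)
    labels.append((suffix, 0))
    return "".join(pieces), labels


# Toy example: "dog" and "red couch" are the grounded spans.
cap = GroundedCaption("a dog sitting on a red couch", [(2, 5), (19, 28)])
text, labels = corrupt_caption(cap, ["blue bicycle", "cat"],
                               corruption_prob=1.0)  # force corruption for the demo
print(text)
print(labels)
```

In practice, the replacement phrases would come from a generation or sampling procedure over the phrase grounding data rather than a fixed list; this sketch only shows the span-replacement and labeling structure.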
Submission Number: 57