Delta - Contrastive Decoding Mitigates Text Hallucinations in Large Language Models

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Contrastive Decoding, Text Hallucination Mitigation, Large Language Models
Abstract:

Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing tasks, yet they are prone to generating hallucinations: factually incorrect or fabricated content that undermines their reliability, especially in high-stakes domains such as healthcare and legal advisory. To address this challenge, we propose Delta, a novel inference-time approach that leverages contrastive decoding to mitigate hallucinations without requiring model retraining or additional training data. Delta randomly masks portions of the input prompt and contrasts the output distributions the model produces for the original and masked prompts, suppressing hallucinations through inference-only computation. On context-rich QA benchmarks, Delta improves performance by roughly 3 and 6 percentage points on SQuAD v1.1 and v2, respectively, and by 7 and 2 percentage points on TriviaQA and Natural Questions under sampling decoding. On SQuAD v2, it improves the no-answer exact match by more than 10 percentage points. These results suggest that Delta is particularly effective when hallucinations arise from contextual ambiguity. By operating purely at inference time, Delta offers a computationally efficient and scalable way to reduce hallucinations in real-world LLM applications.
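To make the mechanism concrete, here is a minimal sketch of prompt-masking contrastive decoding in the spirit of the abstract, assuming a Hugging Face-style causal LM interface. The masking rate (mask_prob), contrast weight (alpha), plausibility threshold (tau), and the exact scoring rule are illustrative assumptions; the abstract does not specify Delta's actual formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def contrastive_next_token(model, tokenizer, prompt,
                           mask_prob=0.15, alpha=1.0, tau=0.1):
    """Pick the next token by contrasting full-prompt and masked-prompt logits.

    mask_prob, alpha, and tau are illustrative values, not Delta's
    published hyperparameters.
    """
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    # Randomly replace a fraction of prompt tokens with a mask/unk token.
    # (Many causal-LM tokenizers lack a mask token; unk is used as a fallback.)
    masked = ids.clone()
    mask_id = tokenizer.mask_token_id or tokenizer.unk_token_id
    masked[torch.rand(masked.shape) < mask_prob] = mask_id

    with torch.no_grad():
        log_p_full = F.log_softmax(model(ids).logits[:, -1, :], dim=-1)
        log_p_masked = F.log_softmax(model(masked).logits[:, -1, :], dim=-1)

    # Adaptive plausibility constraint (as in standard contrastive decoding):
    # keep only tokens whose full-prompt probability is within a factor tau
    # of the most likely token.
    keep = log_p_full >= (log_p_full.max(dim=-1, keepdim=True).values
                          + torch.log(torch.tensor(tau)))

    # Contrastive score: up-weight tokens whose probability drops when the
    # prompt context is masked, i.e. tokens grounded in the context.
    score = log_p_full - alpha * log_p_masked
    score = score.masked_fill(~keep, float("-inf"))
    return score.argmax(dim=-1)
```

In an autoregressive loop, the chosen token would be appended to the prompt and the procedure repeated; a sampling variant would sample from the softmax of the constrained contrastive scores rather than taking the argmax.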

Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9674