Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Reliable Response Generation in the Wild
Abstract: The proliferation of large language models (LLMs) has advanced response generation (RG). Unfortunately, LLMs inevitably face knowledge conflicts between their internal memory and retrieved external context, which undermine response reliability and introduce uncertainty into decision-making. In this work, we analyze how LLMs navigate knowledge conflicts from an information-theoretic perspective and reveal that when conflicting and supplementary information differ significantly, LLMs resolve their preferences confidently; when the distinction is ambiguous, however, LLMs experience heightened uncertainty. Based on this insight, we propose the Sliding-Window Variational Information Bottleneck (SwinVIB), which embeds a pipeline of VIB modules into retrieval augmentation to reject uncertainty-inducing context while retaining the most decisive context within a sliding window, thereby guiding LLM preference. Extensive experiments confirm our theory and show that SwinVIB improves multiple-choice accuracy by 10.61% over the baseline.
Paper Type: Long
Research Area: Generation
Research Area Keywords: analysis; retrieval-augmented generation
Contribution Types: NLP engineering experiment, Theory
Languages Studied: English
Submission Number: 2246
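The core idea described in the abstract, sliding a window over retrieved context and discarding uncertainty-inducing segments while keeping decisive ones, can be illustrated with a minimal sketch. This is not the paper's SwinVIB implementation (which uses learned variational information bottleneck modules); it is a toy stand-in that scores each window by the entropy of its normalized relevance scores, where a low-entropy window contains clearly decisive evidence and a high-entropy (ambiguous) window is rejected. The function names, the entropy threshold, and the use of per-sentence relevance scores are all illustrative assumptions.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of raw scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(ps):
    # Shannon entropy (nats) of a probability distribution.
    return -sum(p * math.log(p) for p in ps if p > 0)

def filter_windows(scores, window=3, max_entropy=1.0):
    """Slide a window over per-sentence relevance scores; keep windows
    whose normalized scores have low entropy (one sentence clearly
    dominates, i.e., the evidence is decisive) and drop ambiguous
    windows where no sentence stands out. `max_entropy` is an
    illustrative threshold, not a value from the paper."""
    kept = []
    for i in range(len(scores) - window + 1):
        w = scores[i:i + window]
        h = entropy(softmax(w))
        if h <= max_entropy:
            kept.append((i, w, h))
    return kept

# A spike at index 2 makes the first three windows decisive;
# the final all-flat window is maximally ambiguous and is dropped.
kept = filter_windows([0.1, 0.2, 5.0, 0.1, 0.1, 0.1],
                      window=3, max_entropy=1.0)
```

In this toy run, windows containing the dominant score (5.0) pass the threshold, while the uniform window `[0.1, 0.1, 0.1]` has entropy ln 3 ≈ 1.10 and is rejected, mirroring the abstract's claim that ambiguity (rather than clear conflict) is what induces uncertainty.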