The Collision of Blades Reveals Their Sharpness: Consolidating Retrieval-Augmented Generation through Knowledge Conflict Exposure
Abstract: To alleviate hallucination and outdated knowledge in LLMs, current LLM systems frequently integrate retrieval-augmented generation (RAG) techniques to form RAG-LLM systems. However, misinformation and disinformation are prevalent in external corpora and seriously threaten the system's reliability, making consolidation necessary. Although many approaches based on the credibility of external content have achieved impressive performance, it remains unclear how such additional assessments can be perceived and ultimately utilized by LLMs. Inspired by cognitive conflict theory, we propose an approach that consolidates RAG-LLM systems through knowledge conflict exposure. To reveal potential knowledge inconsistencies, our approach designs a novel information expansion strategy that introduces comparative content from both high-level intent and fine-grained supporting materials. Through knowledge extraction and conflict prompting, it achieves more effective consolidation. Experimental evaluations demonstrate that our approach achieves an average performance increase of 10% over baseline approaches, underscoring its efficacy in improving LLM output.
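To make the "conflict prompting" idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' actual method: it assumes retrieved passages and extracted claims are already available, and the function and variable names (build_conflict_prompt, passages, claims) are hypothetical.

# Hypothetical sketch of conflict-exposure prompting. Names are illustrative,
# not taken from the paper; the prompt wording is an assumption.

def build_conflict_prompt(question: str, passages: list[str], claims: list[str]) -> str:
    """Assemble a prompt that explicitly surfaces conflicting extracted claims
    so the LLM must reconcile them before answering."""
    evidence = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    conflicts = "\n".join(f"- {c}" for c in claims)
    return (
        f"Question: {question}\n\n"
        f"Retrieved evidence:\n{evidence}\n\n"
        f"The following extracted claims disagree with one another:\n{conflicts}\n\n"
        "Decide which claims are best supported by the evidence, explain the "
        "inconsistency, and then answer the question."
    )

if __name__ == "__main__":
    prompt = build_conflict_prompt(
        question="When was the bridge completed?",
        passages=[
            "Source A states the bridge opened in 1932.",
            "Source B claims construction finished in 1935.",
        ],
        claims=["Completed in 1932", "Completed in 1935"],
    )
    print(prompt)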
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: retrieval-augmented models, security and privacy, robustness
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3359