A Survey on Knowledge Conflicts in the Era of LLMs

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission
Abstract: This survey presents a comprehensive examination of knowledge conflicts in Large Language Models (LLMs), exploring the challenges that arise when LLMs integrate contextual knowledge with their parametric knowledge. Our focus is on three primary types of knowledge conflict: context-memory, inter-context, and intra-memory conflicts. These conflicts can significantly impact the trustworthiness and accuracy of LLMs, especially in real-world applications where misinformation and noise are prevalent. The survey categorizes these conflicts, investigates their causes, and reviews potential mitigation strategies. It aims to provide insights into enhancing the robustness of LLMs, making it a valuable resource for advancing research in this evolving area.
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Surveys
Languages Studied: English