KEIC: A Structured Approach to Editing In-Context Knowledge of Large Language Models in Conversations
Abstract: Large language models (LLMs) are adept at generating coherent and fluent responses within conversational contexts. However, little comprehensive research has explored whether LLMs can dynamically update their knowledge in response to corrections of misinformation provided by users during dialogue sessions. In this paper, we present a unified framework termed Knowledge Editing In Conversation (KEIC), along with a human-annotated dataset, devised to assess the efficacy of LLMs in aligning with user updates in an in-context setting, wherein the previous chat contains a false statement that conflicts with the subsequent user update. Through in-depth investigations, we observe that contemporary LLMs exhibit only modest proficiency in this task. To enhance their KEIC abilities, we propose a structured strategy for handling information updates for LLMs in multi-turn conversations. We demonstrate that our approach is effective and offer insights for the research community on this emerging and essential issue.
Paper Type: Long
Research Area: Discourse and Pragmatics
Research Area Keywords: LLM, self-correction, misinformation correction, dataset
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 3077