COMEM: In-Context Retrieval-Augmented Mass-Editing Memory in Large Language Models

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
Abstract: One of the core ingredients of LLMs is arguably their extensive world knowledge, obtained from a huge pre-training corpus, which provides the background information needed for the reasoning and inference required across various NLP tasks. Because world knowledge continuously evolves over time, LLMs need to be adjusted through ``knowledge editing'', such as updating outdated information or correcting false information. Pursuing reliable ``massive'' editing in terms of $\textit{generalization}$ and $\textit{specificity}$, this paper proposes a unified knowledge editing method referred to as in-$\textbf{CO}$ntext retrieval-augmented $\textbf{M}$ass-$\textbf{E}$diting $\textbf{M}$emory (COMEM), which combines two types of editing approaches: parameter updating and in-context knowledge editing (IKE). In particular, COMEM includes $\textit{retrieval-augmented IKE}$, a novel extension of IKE to the massive editing task based on $\textit{updating}$-aware demonstration construction. Experimental results on the zsRE and CounterFact datasets show that COMEM outperforms all existing methods, achieving state-of-the-art performance. Our code is available at $\url{https://github.com/xxxx/xxxx}$.
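
The abstract describes retrieval-augmented in-context editing at a high level: edited facts are kept in a memory, the ones relevant to a query are retrieved, and they are prepended to the prompt as demonstrations. The sketch below illustrates that general idea only; the names (`Edit`, `EditMemory`, `embed`, `build_prompt`) and the placeholder embedding are hypothetical and not taken from the paper or its released code.

```python
# Minimal sketch of retrieval-augmented in-context knowledge editing.
# Edited facts are stored in a memory, the most relevant ones are retrieved
# for a query, and they are prepended as demonstrations before prompting
# the (possibly parameter-edited) LLM. All names here are illustrative.

from dataclasses import dataclass

import numpy as np


@dataclass
class Edit:
    subject: str
    prompt: str      # e.g. "The capital of France is"
    new_target: str  # e.g. "Marseille" (a counterfactual edit)


def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would use a trained sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)


class EditMemory:
    """Stores edits and retrieves those most similar to a query."""

    def __init__(self, edits: list[Edit]):
        self.edits = edits
        self.keys = np.stack([embed(e.prompt) for e in edits])

    def retrieve(self, query: str, k: int = 3) -> list[Edit]:
        scores = self.keys @ embed(query)          # cosine similarity (unit vectors)
        top = np.argsort(-scores)[:k]
        return [self.edits[i] for i in top]


def build_prompt(memory: EditMemory, query: str) -> str:
    """Prepend retrieved edits as in-context demonstrations of the updated facts."""
    demos = [f"New fact: {e.prompt} {e.new_target}." for e in memory.retrieve(query)]
    return "\n".join(demos + [f"Question: {query}\nAnswer:"])


if __name__ == "__main__":
    memory = EditMemory([
        Edit("France", "The capital of France is", "Marseille"),
        Edit("Python", "Python was created by", "Dennis Ritchie"),
    ])
    # The resulting prompt would then be passed to the LLM for generation.
    print(build_prompt(memory, "What is the capital of France?"))
```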
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Languages Studied: English