POMEM: In-Context Knowledge Post Editing on Massive-Editing Memory in Language Models

ACL ARR 2024 June Submission5485 Authors

16 Jun 2024 (modified: 03 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Parameter updating (PU), while widely used in $\textit{knowledge editing}$, still shows limited performance on generalization and locality metrics, likely due to catastrophic forgetting, ripple effects, or unseen contexts. This paper proposes a novel $\textit{in-context post-editing}$ method that is applied on top of PU-based prediction results, namely $\textbf{POMEM}$ -- In-context knowledge $\textbf{po}$st $\textbf{e}$diting on $\textbf{m}$assive-$\textbf{e}$diting $\textbf{m}$emory. POMEM consists of two in-context post-editing prompting methods, one for "in-scope" and one for "out-of-scope" queries, referred to as Copier and Recaller, respectively: 1) $\textbf{Copier}$ is designed for in-scope cases, mainly aiming to further enhance generalization of the edit; 2) $\textbf{Recaller}$ is designed for out-of-scope cases and introduces a novel "recalling" prompt that recovers the prediction of the original pre-edited model while still using the PU-based edited model. Experimental results on the Counterfact dataset show that POMEM achieves state-of-the-art performance. Our code is publicly available at \url{https://github.com/XXX/XXX}.
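The two-branch prompting described in the abstract can be sketched as simple prompt construction plus routing. This is a minimal illustration, not the paper's implementation: the function names (`in_scope`, `build_copier_prompt`, `build_recaller_prompt`, `post_edit_prompt`) and the substring-based scope check are assumptions for the sake of the sketch.

```python
def in_scope(query: str, edit_subject: str) -> bool:
    """Crude scope check: treat a query as in-scope if it mentions the
    edited subject. A real scope classifier would be more involved."""
    return edit_subject.lower() in query.lower()


def build_copier_prompt(query: str, edited_fact: str) -> str:
    """Copier branch (in-scope): restate the edited fact in-context so
    that paraphrased queries generalize to the new answer."""
    return f"New fact: {edited_fact}\nQuestion: {query}\nAnswer:"


def build_recaller_prompt(query: str) -> str:
    """Recaller branch (out-of-scope): prompt the edited model to answer
    as the original, pre-edit model would, preserving locality."""
    return ("Ignore any recently edited facts and answer as the original "
            f"model would.\nQuestion: {query}\nAnswer:")


def post_edit_prompt(query: str, edit_subject: str, edited_fact: str) -> str:
    """Route a query to the Copier or Recaller prompt."""
    if in_scope(query, edit_subject):
        return build_copier_prompt(query, edited_fact)
    return build_recaller_prompt(query)
```

The resulting prompt would then be fed to the PU-edited language model; the sketch only shows the routing logic, which is the part the abstract specifies.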
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Knowledge-Editing, prompt learning, Post-Editing
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 5485