Improving Sequential Model Editing with Fact Retrieval

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: Language Modeling and Analysis of Language Models
Submission Track 2: Theme Track: Large Language Models and the Future of NLP
Keywords: Model Editing; Sequential Model Editing; Pre-trained Language Model
TL;DR: A fact-aware retrieval-based editing method achieves efficient, stable, and scalable sequential model editing.
Abstract: The task of sequential model editing is to fix erroneous knowledge in Pre-trained Language Models (PLMs) efficiently, precisely, and continuously. Although existing methods can handle a small number of modifications, they suffer a performance decline or require additional annotated data as the number of edits increases. In this paper, we propose a $\textbf{R}$etrieval $\textbf{A}$ugmented $\textbf{S}$equential Model $\textbf{E}$diting framework ($\textbf{RASE}$) that leverages factual information to enhance editing generalization and to guide the identification of edits by retrieving related facts from a fact-patch memory that we construct. Our main findings are: (i) State-of-the-art methods struggle to correct large numbers of mistakes stably and efficiently; (ii) Even when scaled up to thousands of edits, RASE significantly enhances editing generalization while maintaining consistent performance and efficiency; (iii) RASE can edit large-scale PLMs and improve the performance of different editors. Moreover, it can integrate with ChatGPT and further improve performance. Our code and data are available at: https://github.com/sev777/RASE.
Submission Number: 2048
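
As an illustration of the retrieval-based editing idea described in the abstract, below is a minimal, hypothetical sketch (not the authors' implementation): edited facts are embedded as keys in a fact-patch memory, each key is associated with a patch, and a patch is applied only when a sufficiently similar fact is retrieved for the query; otherwise the unedited PLM output is used. The encoder, class names, and similarity threshold are all illustrative assumptions.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder encoder: deterministic random unit vector per string.
    A real system would use a trained sentence encoder so that paraphrases
    of an edited fact also retrieve its patch."""
    seed = int(hashlib.md5(text.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class FactPatchMemory:
    """Hypothetical fact-patch memory: fact embeddings are keys, and each
    key is associated with one patch (here, simply the edited answer)."""
    def __init__(self, threshold: float = 0.8):
        self.keys: list[np.ndarray] = []
        self.patches: list[str] = []
        self.threshold = threshold

    def add_edit(self, fact: str, patch: str) -> None:
        self.keys.append(embed(fact))
        self.patches.append(patch)

    def retrieve(self, query: str):
        """Return the patch of the most similar stored fact, or None if no
        fact is similar enough (i.e., fall back to the unedited PLM)."""
        if not self.keys:
            return None
        sims = np.stack(self.keys) @ embed(query)
        best = int(np.argmax(sims))
        return self.patches[best] if sims[best] >= self.threshold else None

memory = FactPatchMemory()
memory.add_edit("The capital of France is Paris.", "Paris")

# With the placeholder encoder only identical strings match; a trained
# encoder would also match paraphrases and questions about the fact.
print(memory.retrieve("The capital of France is Paris."))  # -> Paris
print(memory.retrieve("Who wrote Hamlet?"))                # -> None
```

In RASE itself the patch corresponds to an edit produced by an underlying model editor rather than a literal answer string; the stand-in above only illustrates the retrieval-and-threshold step that decides whether a stored edit applies to a given query.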