Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors

Published: 18 Nov 2022, Last Modified: 08 Sept 2024 · RobustSeq @ NeurIPS 2022 Oral
Abstract: Large language models often err during deployment due to non-representative training data or distribution shift in the test set. Recently, model editors have been proposed to fix such errors by adjusting a pre-trained model's weights. However, these approaches quickly degrade the model's performance on upstream data and forget how to fix previous errors. We propose and study a novel Lifelong Model Editing setting, where errors stream into a deployed model and we update the model to correct its predictions without affecting its behavior on unrelated inputs. We propose General Retrieval Adaptors for Continual Editing, or GRACE, which learns and caches a particular layer's activations in a codebook as edits stream in, while the original model weights remain frozen. This ensures similar edits are treated similarly without altering the model's performance on unrelated instances. Experimentally, we show that GRACE substantially improves over recent model editors.
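To make the described mechanism concrete, below is a minimal PyTorch sketch of a discrete key-value adaptor wrapped around one frozen layer, following the abstract's description (cached activations as keys, retrieval for similar inputs, frozen original weights). The class and parameter names (`DiscreteKeyValueAdaptor`, `epsilon`, `add_edit`) are illustrative assumptions, not the authors' released code, and the value-learning step is reduced to storing a precomputed correction.

```python
import torch
import torch.nn as nn


class DiscreteKeyValueAdaptor(nn.Module):
    """Sketch of a codebook-style adaptor around one frozen layer.

    Keys are cached activations from inputs that needed editing; values are
    replacement activations that correct the model's prediction. At inference,
    an input whose activation falls within `epsilon` of a cached key is
    answered from the codebook; everything else passes through the frozen layer.
    """

    def __init__(self, layer: nn.Module, epsilon: float = 1.0):
        super().__init__()
        self.layer = layer
        for p in self.layer.parameters():      # original model weights stay frozen
            p.requires_grad_(False)
        self.keys: list[torch.Tensor] = []     # cached layer inputs (edit keys)
        self.values: list[torch.Tensor] = []   # corrected layer outputs (edit values)
        self.epsilon = epsilon                 # similarity radius for retrieval

    @torch.no_grad()
    def add_edit(self, key: torch.Tensor, value: torch.Tensor) -> None:
        """Cache one streaming edit: the triggering activation and its correction."""
        self.keys.append(key.detach().clone())
        self.values.append(value.detach().clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # For simplicity, x is a single activation vector of shape (d,).
        out = self.layer(x)                    # default: identical to the frozen model
        if not self.keys:
            return out
        keys = torch.stack(self.keys)          # (n_edits, d)
        dists = torch.norm(keys - x, dim=-1)   # distance from x to every cached key
        nearest = int(torch.argmin(dists))
        if dists[nearest] < self.epsilon:      # similar edits are treated similarly
            return self.values[nearest]
        return out                             # unrelated inputs are left untouched


# Hypothetical usage: wrap a hidden layer and stream in one edit.
adaptor = DiscreteKeyValueAdaptor(nn.Linear(16, 16), epsilon=0.5)
h = torch.randn(16)
adaptor.add_edit(key=h, value=torch.zeros(16))    # cache a correction for h
y = adaptor(h + 0.01 * torch.randn(16))           # a nearby input retrieves the cached value
```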
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/aging-with-grace-lifelong-model-editing-with/code) (via CatalyzeX)