Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 · NeurIPS 2023 poster
Keywords: Model Editing, Continual Learning, Model Repair
TL;DR: We show how to make long sequences of targeted edits to specific behaviors of pre-trained models by learning and caching discrete segments in their latent space in which their layer-to-layer transformations are modified
Abstract: Deployed language models decay over time due to shifting inputs, changing user needs, or emergent world-knowledge gaps. When such problems are identified, we want to make targeted edits while avoiding expensive retraining. However, current model editors, which modify targeted behaviors of pre-trained models, degrade model performance quickly across multiple, sequential edits. We propose GRACE, a *lifelong* model editing method, which implements spot-fixes on streaming errors of a deployed model, ensuring minimal impact on unrelated inputs. GRACE writes new mappings into a pre-trained model's latent space, creating a discrete, local codebook of edits without altering model weights. This is the first method enabling thousands of sequential edits using only streaming errors. Our experiments on T5, BERT, and GPT models show GRACE's state-of-the-art performance in making and retaining edits, while generalizing to unseen inputs. Our code is available at [github.com/thartvigsen/grace](https://www.github.com/thartvigsen/grace).
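To make the abstract's mechanism concrete, below is a minimal sketch of a GRACE-style key-value adaptor wrapped around a single frozen layer: cached keys are hidden activations of inputs that triggered edits, cached values are replacement activations, and a per-key deferral radius decides when to substitute. The class and method names (`GraceAdaptor`, `add_edit`, `eps_init`), the pooled 2-D activation shape, and the training details are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GraceAdaptor(nn.Module):
    """Illustrative GRACE-style discrete key-value codebook around one frozen layer.

    The wrapped layer's weights are never changed; edits live only in the codebook.
    Assumes activations are already pooled to shape (batch, hidden_dim).
    """

    def __init__(self, layer: nn.Module, eps_init: float = 1.0):
        super().__init__()
        self.layer = layer            # frozen pre-trained layer-to-layer transform
        self.eps_init = eps_init      # initial deferral radius for each new edit
        self.keys: list[torch.Tensor] = []
        self.values: list[torch.Tensor] = []
        self.eps: list[torch.Tensor] = []

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        out = self.layer(h)                       # default behavior for unrelated inputs
        if not self.keys:
            return out
        keys = torch.stack(self.keys)             # (num_edits, hidden_dim)
        dists = torch.cdist(h, keys)              # (batch, num_edits)
        nearest = dists.argmin(dim=-1)
        for i in range(h.shape[0]):
            j = nearest[i]
            if dists[i, j] <= self.eps[j]:        # inside an edit's deferral radius
                out[i] = self.values[j]           # substitute the cached edit value
        return out

    def add_edit(self, h_edit: torch.Tensor, value: torch.Tensor) -> None:
        """Cache a new (key, value) pair for a streaming error; model weights stay frozen."""
        self.keys.append(h_edit.detach())         # key: activation of the erroneous input
        self.values.append(value.detach())        # value: activation trained to fix the error
        self.eps.append(torch.tensor(self.eps_init))
```

In this sketch, handling thousands of sequential edits amounts to appending entries to the codebook; inputs far from every key fall through to the unmodified layer, which is how impact on unrelated inputs stays minimal.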
Submission Number: 6611