FastEdit: Low-Rank Structured Regularization for Efficient Model Editing

ICLR 2026 Conference Submission 25400 Authors

20 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Large Language Models, Model Editing, Knowledge Updating
Abstract: When new knowledge emerges, it is crucial to update large language models (LLMs) efficiently so that they reflect the latest information. However, state-of-the-art methods widely adopted in the model editing community, such as MEMIT, PRUNE, and AlphaEdit, suffer from prohibitively slow editing speeds, often taking 6 to 14 hours to sequentially edit just 2000 facts on models like LLaMA-3-8B, making real-time updates impractical, especially as model scale increases. Moreover, they require extensive pre-computation to sample pre-edit knowledge, a step that can take over 24 hours and severely limits their deployability. In this paper, we present FastEdit, a highly efficient editing framework that enables rapid and scalable model updates. Our key insight is to exploit the low-rank structure inherent in editing updates through a structured regularizer, which allows us to avoid costly matrix inversions via the Sherman-Morrison-Woodbury (SMW) identity. This drastically accelerates the computation of update matrices while preserving edit quality. Crucially, FastEdit requires only a small number of pre-edit samples, reducing both memory and computational overhead. On 2000 sequential edits, FastEdit completes the process in just 1 hour, an order of magnitude faster than prior work, without sacrificing accuracy. Our method significantly lowers the barrier to practical model editing, enabling timely and scalable knowledge updates in large models.
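To make the central speed argument concrete, below is a minimal sketch of the Sherman-Morrison-Woodbury identity the abstract invokes. This is not the FastEdit implementation (the paper's code is not shown here); it only illustrates, with hypothetical dimensions d and k and NumPy, why a rank-k update to an already-inverted d x d matrix costs a k x k solve instead of a full O(d^3) re-inversion:

```python
# Illustrative sketch only: NOT the authors' method, just the SMW identity
# the abstract refers to. Sizes (d, k) and names are hypothetical.
import numpy as np

def smw_inverse(A_inv: np.ndarray, U: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Return (A + U V)^{-1} given A^{-1}, where U V is a rank-k update.

    SMW: (A + U V)^{-1} = A^{-1} - A^{-1} U (I_k + V A^{-1} U)^{-1} V A^{-1}.
    Only a k x k system is solved, so each update costs O(d^2 k), not O(d^3).
    """
    k = U.shape[1]
    AinvU = A_inv @ U                      # d x k
    small = np.eye(k) + V @ AinvU          # k x k "capacitance" matrix
    return A_inv - AinvU @ np.linalg.solve(small, V @ A_inv)

rng = np.random.default_rng(0)
d, k = 512, 4                                       # high dim, low-rank update
A = np.eye(d) + 0.01 * rng.standard_normal((d, d))  # well-conditioned base
U = rng.standard_normal((d, k))
V = rng.standard_normal((k, d))

A_inv = np.linalg.inv(A)             # paid once, then reused across updates
fast = smw_inverse(A_inv, U, V)      # cheap rank-k correction
slow = np.linalg.inv(A + U @ V)      # full re-inversion, for comparison
assert np.allclose(fast, slow, atol=1e-6)
```

In a sequential-editing setting, the benefit compounds: if each of the 2000 edits perturbs the relevant matrix by a low-rank term, maintaining the running inverse via SMW replaces 2000 full inversions with 2000 small solves, which is consistent with the order-of-magnitude speedup the abstract claims.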
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 25400