FastEdit: Low-Rank Structured Regularization for Efficient Model Editing

20 Sept 2025 (modified: 06 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Large Language Models, Model Editing, Knowledge Updating
Abstract: When new knowledge emerges, it is crucial to efficiently update large language models (LLMs) to reflect the latest information. However, state-of-the-art methods widely adopted in the model editing community, such as MEMIT, EMMET, and AlphaEdit, suffer from prohibitively slow editing speeds, often taking over 15 hours to sequentially edit 5,000 facts on models like LLaMA-3-8B, making real-time updates impractical, especially as model scale increases. Moreover, they require extensive pre-computation to sample pre-edit knowledge, a step that can take over 24 hours and severely limits their deployability. In this paper, we present FastEdit, a framework that leverages the intrinsic low-rank structure of FFN key spaces not only for speed but also for more effective editing. FastEdit regularizes only the low-rank primary semantic subspace, where most pre-edit knowledge resides, while leaving the remaining directions in the key space unregularized and freely editable. This design channels edits into the unregularized subspace, thereby better preserving pre-trained knowledge in the primary semantic subspace, and enables fast computation via the Sherman–Morrison–Woodbury identity. On LLaMA-3-8B, FastEdit completes 5,000 sequential edits within 4 hours and consistently achieves higher editing accuracy and stability. Moreover, it requires only a small number of pre-edit samples, drastically reducing preprocessing overhead. Our work shows that low-rank structure provides a principled way to balance editability, efficiency, and knowledge preservation.
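For reference, the Sherman–Morrison–Woodbury identity that the abstract credits for the fast computation is, in generic notation (A an invertible n×n matrix, U an n×r matrix, V an r×n matrix, C an invertible r×r matrix; these symbols are illustrative and not the paper's own notation):

    (A + UCV)^{-1} = A^{-1} - A^{-1} U \, (C^{-1} + V A^{-1} U)^{-1} \, V A^{-1}

If regularizing only a rank-r subspace perturbs an already-inverted matrix by a low-rank term with r much smaller than n, this identity reduces each update to an r×r solve rather than a fresh n×n inversion, which is presumably the mechanism behind the claimed speedup over full-rank regularization.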
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 25400