ReviveEdit: Robust Sequential Editing via Dominant Subspace Preservation

16 Sept 2025 (modified: 05 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Sequential Model Editing, Large Language Model
TL;DR: We propose a dominant-subspace identification and protection method that ensures model stability and editing success during long-sequence editing.
Abstract: Sequential knowledge editing in large language models often causes catastrophic collapse of the model’s general abilities, particularly for parameter-modifying methods. Existing approaches attempt to mitigate this issue with heuristic constraints, but they lack a principled understanding of the underlying failure mechanism and overlook the structured impact of edits on model parameters. In this work, we conduct a spectral analysis and identify a key failure mechanism: the progressive corruption of the dominant singular subspace of weight matrices, a low-rank subspace that we show is both crucial for encoding general abilities and highly sensitive to perturbations. Based on this insight, we propose REVIVE, a novel plug-and-play framework that prevents model collapse by explicitly preserving this dominant subspace. REVIVE projects any given update onto the singular vector basis of the original weight matrix and removes all components that would interfere with the protected subspace. This allows new knowledge to be integrated through less critical directions without damaging the model’s core structure. Extensive experiments show that REVIVE substantially outperforms existing methods, maintaining high editing efficacy and preserving general capabilities even under extreme sequences of up to 20,000 edits.
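The abstract describes projecting an update onto the singular vector basis of the original weight matrix and removing components that touch the protected dominant subspace. A minimal sketch of one plausible instantiation of that idea is below, assuming the protected subspace is spanned by the top-k left and right singular vectors of the original weight and that "removing interfering components" means projecting the update onto the orthogonal complement of those vectors on both sides (the paper's exact projection may differ; `protect_dominant_subspace` and `k` are illustrative names):

```python
import numpy as np

def protect_dominant_subspace(W, dW, k):
    """Project an edit dW away from the top-k singular subspace of W.

    Hypothetical sketch: the dominant subspace is taken to be the span of
    the top-k left/right singular vectors of the original weight matrix W,
    and the update is projected onto the orthogonal complement on both sides.
    """
    # SVD of the original (pre-edit) weight matrix
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    Uk = U[:, :k]        # top-k left singular vectors (protected)
    Vk = Vt[:k, :].T     # top-k right singular vectors (protected)
    # Orthogonal-complement projectors for rows and columns
    P_left = np.eye(W.shape[0]) - Uk @ Uk.T
    P_right = np.eye(W.shape[1]) - Vk @ Vk.T
    # The surviving update lives entirely in the non-dominant directions
    return P_left @ dW @ P_right

# Usage: the edited weight W + dW_safe leaves the protected directions untouched
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))
dW = 0.1 * rng.standard_normal((8, 6))
dW_safe = protect_dominant_subspace(W, dW, k=2)
```

By construction, `Uk.T @ dW_safe` and `dW_safe @ Vk` are zero, so the edit cannot perturb the protected low-rank structure; new knowledge is absorbed only through the remaining, less critical directions.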
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 6734