CleanEdit: Retention-Aware Pruning and Bounded Replay for Lifelong Model Editing

ICLR 2026 Conference Submission 15133 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Lifelong model editing, Continual learning, Key–value editing, Memory pruning, Bounded replay
Abstract: While lifelong model editing allows deployed systems to be updated continuously, the accumulation of edits often leads to performance decay and instability. This decay stems from the unchecked growth of the edit memory, where redundant or harmful entries corrupt the model's knowledge and increase inference costs. We address this challenge with CleanEdit, a self-maintaining mechanism that actively manages the edit memory. The core of CleanEdit is a principled maintenance loop. It first diagnoses the impact of each edit by estimating its counterfactual harm. A sequential hypothesis test then makes a statistically grounded decision to prune entries identified as detrimental. To avoid losing valuable information, the supervisory signal from pruned samples is recycled for relearning via a bounded replay process. Experiments on sequential editing benchmarks demonstrate that CleanEdit significantly improves the model's post-edit performance, achieving a superior balance between retaining past knowledge and integrating new information.
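The abstract names three components of the maintenance loop (counterfactual harm estimation, a sequential hypothesis test for pruning, and bounded replay) without giving concrete details. The sketch below is one illustrative way such a loop could be wired together; the `EditEntry` fields, the Gaussian SPRT thresholds, the probe-based harm estimate, and the replay-buffer size are all assumptions for illustration, not the paper's actual algorithm.

```python
"""Minimal sketch of a CleanEdit-style maintenance loop (illustrative only).

Everything concrete here -- the EditEntry fields, the harm probe, the SPRT
parameters, and the replay bound -- is an assumption; the paper's abstract
only names the three components being sketched.
"""

import math
import random
from collections import deque
from dataclasses import dataclass, field


@dataclass
class EditEntry:
    """One key-value edit plus the running statistics used by the test."""
    key: str
    value: str
    harm_scores: list = field(default_factory=list)  # per-probe harm estimates
    llr: float = 0.0                                  # cumulative log-likelihood ratio


def counterfactual_harm(entry: EditEntry, probe_loss_with: float,
                        probe_loss_without: float) -> float:
    """Harm = how much worse the model does on a probe set because the
    entry is present (positive => the edit hurts retained knowledge)."""
    return probe_loss_with - probe_loss_without


def sprt_update(entry: EditEntry, harm: float,
                mu_benign: float = 0.0, mu_harmful: float = 0.1,
                sigma: float = 0.05) -> str:
    """One step of a Gaussian sequential probability ratio test.

    H0: harm ~ N(mu_benign, sigma^2)   (keep the edit)
    H1: harm ~ N(mu_harmful, sigma^2)  (prune the edit)
    Returns 'prune', 'keep', or 'continue'.
    """
    entry.harm_scores.append(harm)
    # Log-likelihood ratio increment for a single Gaussian observation.
    entry.llr += ((harm - mu_benign) ** 2 - (harm - mu_harmful) ** 2) / (2 * sigma ** 2)
    alpha, beta = 0.05, 0.05              # target type-I / type-II error rates
    upper = math.log((1 - beta) / alpha)  # accept H1 -> prune
    lower = math.log(beta / (1 - alpha))  # accept H0 -> keep
    if entry.llr >= upper:
        return "prune"
    if entry.llr <= lower:
        return "keep"
    return "continue"


def maintenance_step(memory: list, replay_buffer: deque, probe_fn) -> None:
    """One pass of the loop: diagnose each edit, prune, recycle the signal."""
    for entry in list(memory):
        loss_with, loss_without = probe_fn(entry)
        harm = counterfactual_harm(entry, loss_with, loss_without)
        if sprt_update(entry, harm) == "prune":
            memory.remove(entry)
            # Bounded replay: keep the supervisory signal for relearning,
            # with oldest-first eviction once the bound is hit.
            replay_buffer.append((entry.key, entry.value))


if __name__ == "__main__":
    random.seed(0)
    memory = [EditEntry(f"k{i}", f"v{i}") for i in range(5)]
    replay = deque(maxlen=100)  # the bound on replay

    # Stand-in probe: entry k3 is "harmful", the rest are benign.
    def probe_fn(entry):
        base = 1.0
        extra = 0.12 if entry.key == "k3" else 0.0
        return base + extra + random.gauss(0, 0.02), base + random.gauss(0, 0.02)

    for _ in range(20):
        maintenance_step(memory, replay, probe_fn)

    print("kept edits:", [e.key for e in memory])
    print("replay buffer:", list(replay))
```

In this toy run, the harmful entry accumulates positive log-likelihood ratio and crosses the prune threshold, after which its key-value pair moves to the bounded replay buffer rather than being discarded outright, mirroring the recycle-for-relearning step described in the abstract.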
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 15133