REPAIR: Robust Lifelong Model Editing via Progressive Adaptive Intervention and Reintegration

ICLR 2026 Conference Submission15248 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Lifelong model editing; Large language model; Knowledge distillation; Memory pruning; Continual Learning
Abstract: Once trained, large language models (LLMs) face a critical limitation: they cannot easily absorb new information or correct errors without costly retraining, which often introduces unintended side effects. We present REPAIR (**R**obust **E**diting via **P**rogressive **A**daptive **I**ntervention and **R**eintegration), a lifelong editing framework that enables precise, low-cost updates while safeguarding unrelated knowledge. REPAIR is engineered to overcome the key hurdles in model editing. It counters the instability and conflicts arising from large-scale sequential edits through a closed-loop feedback system with dynamic memory management. To improve the poor generalization that results from few-shot examples, it applies distribution-aware optimization, which groups similar data for more effective learning. Finally, through frequent knowledge fusion and strong locality guards, it addresses the unintended ripple effects that traditional, distribution-agnostic methods fail to account for. Experiments show REPAIR boosts editing accuracy by 10%–30% across multiple model families and significantly reduces knowledge forgetting. This work provides a robust framework for creating reliable, scalable, and continually evolving LLMs.
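The "grouping similar data" idea behind distribution-aware optimization can be illustrated with a minimal sketch: cluster incoming edit requests by the cosine similarity of their embeddings, so related edits are optimized together. This is a hypothetical stand-in, not REPAIR's actual algorithm; the embedding source, threshold, and greedy clustering strategy are all illustrative assumptions.

```python
import math


def _cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def group_edits(embeddings, threshold=0.8):
    """Greedily group edit requests whose embeddings are cosine-similar.

    NOTE: illustrative sketch of distribution-aware grouping; the paper's
    actual grouping procedure is not specified in the abstract.
    Returns a list of groups, each a list of indices into `embeddings`.
    """
    groups = []     # member indices per group
    centroids = []  # running mean embedding per group
    for i, vec in enumerate(embeddings):
        # Find the most similar existing group, if any clears the threshold.
        best, best_sim = -1, threshold
        for g, centroid in enumerate(centroids):
            sim = _cosine(vec, centroid)
            if sim >= best_sim:
                best, best_sim = g, sim
        if best == -1:
            # No group is similar enough: start a new one.
            groups.append([i])
            centroids.append(list(vec))
        else:
            # Join the best group and update its running-mean centroid.
            groups[best].append(i)
            n = len(groups[best])
            centroids[best] = [
                (c * (n - 1) + v) / n for c, v in zip(centroids[best], vec)
            ]
    return groups
```

Edits within one group can then be applied in a single optimization step, while dissimilar edits stay separated, which is one plausible way to reduce interference between unrelated updates.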
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 15248