Reversible Lifelong Model Editing via Semantic Routing-Based LoRA

15 Sept 2025 (modified: 28 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: lifelong model editing, lora, llm
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing. However, the dynamic evolution of real-world knowledge necessitates continual editing of specific knowledge within LLMs. While existing model editing methods explore modular isolation or parameter-efficient strategies, they often suffer from semantic drift or knowledge forgetting during sequential editing due to the continual updating of semantic content. To address these challenges, we propose $\textbf{SoLA}$, a $\textbf{S}$emantic r$\textbf{o}$uting-based $\textbf{L}$oR$\textbf{A}$ framework for reversible lifelong model editing. In SoLA, each edit is encapsulated as an independent LoRA module that is frozen after training, and a semantic routing record maps it to the semantic representation of its input, allowing dynamic activation of LoRA modules via semantic matching. This mechanism avoids the semantic drift caused by continual cluster updating and mitigates the catastrophic forgetting caused by parameter sharing. Importantly, SoLA supports both insertion and deletion of edits: by simply removing its key from the semantic routing record, a specific edit can be precisely revoked, restoring the model's original behavior. To our knowledge, SoLA is the first method in the existing literature to achieve this reversible rollback editing capability. Furthermore, SoLA integrates the decision-making process into the edited layer itself, eliminating the need for auxiliary routing networks and enabling end-to-end decision making. Extensive experiments across three representative tasks (document classification, question answering, and hallucination correction) demonstrate that SoLA effectively learns and retains edited knowledge, achieving accurate, efficient, and reversible lifelong model editing.
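The following is a minimal sketch of how such a routing mechanism could be organized, assuming a cosine-similarity match between a pooled input representation and per-edit keys; the names (SemanticRouter, add_edit, remove_edit, edited_forward) and the threshold-based matching rule are illustrative assumptions, not the paper's actual API.

```python
# Illustrative sketch of semantic-routing-based LoRA dispatch.
# All names and the similarity-threshold rule are assumptions for illustration,
# not the implementation described in the paper.
import torch
import torch.nn.functional as F


class SemanticRouter:
    """Maps input semantic representations to frozen per-edit LoRA modules."""

    def __init__(self, sim_threshold: float = 0.8):
        self.sim_threshold = sim_threshold
        self.keys = {}          # edit_id -> normalized semantic key vector
        self.lora_modules = {}  # edit_id -> (A, B) low-rank factors, frozen

    def add_edit(self, edit_id: str, key: torch.Tensor,
                 A: torch.Tensor, B: torch.Tensor) -> None:
        # Freeze the trained LoRA factors and record the routing key.
        self.keys[edit_id] = F.normalize(key, dim=-1)
        self.lora_modules[edit_id] = (A.detach(), B.detach())

    def remove_edit(self, edit_id: str) -> None:
        # Reversible rollback: deleting the key (and its module)
        # restores the base model's behavior for that edit.
        self.keys.pop(edit_id, None)
        self.lora_modules.pop(edit_id, None)

    def route(self, query: torch.Tensor):
        # Return the LoRA module whose key best matches the query,
        # or None if no key is similar enough.
        if not self.keys:
            return None
        q = F.normalize(query, dim=-1)
        ids = list(self.keys)
        sims = torch.stack([q @ self.keys[i] for i in ids])
        best = int(sims.argmax())
        return self.lora_modules[ids[best]] if sims[best] >= self.sim_threshold else None


def edited_forward(W: torch.Tensor, x: torch.Tensor,
                   router: SemanticRouter) -> torch.Tensor:
    """Apply the base weight W, plus a routed LoRA delta when an edit matches x."""
    out = x @ W.T                       # base projection, W: (d_out, d_in), x: (n, d_in)
    match = router.route(x.mean(dim=0)) # pooled input as the semantic query
    if match is not None:
        A, B = match                    # A: (r, d_in), B: (d_out, r)
        out = out + (x @ A.T) @ B.T     # add the low-rank update B @ A
    return out
```

Because each edit's LoRA factors are frozen and only activated through its routing key, removing that key is sufficient to revoke the edit without touching the base weights or any other edit, which is the sketch's analogue of the reversible rollback described in the abstract.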
Primary Area: applications to neuroscience & cognitive science
Submission Number: 5728