Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model Editing

ACL ARR 2025 February Submission 6392 Authors

16 Feb 2025 (modified: 09 May 2025) · CC BY 4.0
Abstract: Large Language Models (LLMs) are used in various downstream language tasks, making it crucial to keep their knowledge up-to-date, but both retraining and fine-tuning the model can be costly. Model editing offers an efficient and effective alternative through a single update to only a key subset of model parameters. While efficient, these methods are not perfect: sometimes a knowledge edit fails to take hold, i.e., UnderEdit, or the edit contaminates neighboring knowledge that should remain unchanged, i.e., OverEdit. To address these limitations, we propose $\textbf{iterative model editing}$, based on our hypothesis that a single parameter update is often insufficient, to mitigate UnderEdit, and $\textbf{neighbor-assisted model editing}$, which incorporates neighboring knowledge during editing to minimize OverEdit. Extensive experiments demonstrate that our methods reduce UnderEdit by up to 38 percentage points and OverEdit by up to 6 percentage points across multiple model editing algorithms, LLMs, and benchmark datasets.
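To make the two ideas concrete, below is a minimal Python sketch of how iterative and neighbor-assisted editing could wrap a generic single-shot editor. Every name here (`apply_edit`, `answers`, the dictionary layout for edits and neighbors) is a hypothetical interface chosen for illustration, not the authors' actual implementation or any particular editing library's API.

```python
from typing import Callable, Dict, List, Tuple

# A single edit request, e.g. {"prompt": "The capital of France is", "target": "Paris"}
Edit = Dict[str, str]

def iterative_edit(
    model,
    edit: Edit,
    apply_edit: Callable,   # hypothetical single-shot editor (e.g. a ROME/MEMIT-style update)
    answers: Callable,      # hypothetical probe: answers(model, prompt) -> current completion
    max_iters: int = 5,
):
    """Re-apply the parameter update until the edit takes hold (mitigates UnderEdit)."""
    for _ in range(max_iters):
        if answers(model, edit["prompt"]) == edit["target"]:
            break  # the edit has taken hold; stop early
        model = apply_edit(model, [edit])  # apply one more parameter update
    return model

def neighbor_assisted_edit(
    model,
    edit: Edit,
    neighbors: List[Tuple[str, str]],  # (prompt, original answer) pairs to preserve
    apply_edit: Callable,
):
    """Pin neighboring facts to their original answers inside the edit batch,
    discouraging the update from changing them (mitigates OverEdit)."""
    batch = [edit] + [{"prompt": p, "target": a} for p, a in neighbors]
    return apply_edit(model, batch)
```

The two wrappers compose naturally: one could run the neighbor-augmented batch inside the iterative loop, repeating the update until the target edit succeeds while the pinned neighbors hold their original answers.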
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: Knowledge Editing, Optimization
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 6392