Keywords: Large Language Model, Model Editing
TL;DR: We introduce a diagnostic framework for editing LLM knowledge, show how fact traits shape edit success, and reveal "Generative Aphasia," where precise edits preserve QA accuracy but break long-form fluency.
Abstract: Model editing, the process of efficiently modifying factual knowledge in pre-trained language models, is critical for maintaining their accuracy and relevance. However, existing editing methods often introduce unintended side effects, degrading model performance in unpredictable ways. While much research has focused on improving editing algorithms, the intrinsic properties of the target knowledge remain a significant, underexplored factor. This paper addresses this gap by first proposing the "Knowledge Spectrum," a systematic framework for categorizing knowledge based on its real-world popularity, the model's pre-edit familiarity, and the linguistic structure of the eliciting question. Our empirical analysis reveals that these characteristics are strong predictors of editing success and stability. Informed by these findings, we introduce the "Knowledge-Diagnostic Framework," an adaptive strategy that tailors editing intensity to the diagnosed difficulty of a knowledge item. We demonstrate that this framework significantly improves success rates for challenging edits while optimizing computational resources. Our work provides a more comprehensive understanding of the factors governing model editing.
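The abstract describes an adaptive, diagnose-then-edit strategy: score a knowledge item's difficulty from its traits (popularity, pre-edit familiarity, question structure), then choose the editing intensity accordingly. As a rough illustrative sketch only, not the paper's actual method, the names (`KnowledgeProfile`, `diagnose_difficulty`, `choose_edit_intensity`), the trait weights, and the hyperparameter values below are all hypothetical:

```python
from dataclasses import dataclass


@dataclass
class KnowledgeProfile:
    """Traits of a single fact to be edited (all scores assumed normalized to [0, 1])."""
    popularity: float            # real-world popularity of the fact
    familiarity: float           # model's pre-edit familiarity (e.g., pre-edit QA accuracy)
    question_complexity: float   # linguistic complexity of the eliciting question


def diagnose_difficulty(profile: KnowledgeProfile) -> float:
    """Map a knowledge item's traits to a scalar editing-difficulty score.

    Assumption: rarer, less familiar facts elicited by more complex questions
    are harder to edit; the weights are placeholders, not from the paper.
    """
    return (
        0.4 * (1.0 - profile.popularity)
        + 0.4 * (1.0 - profile.familiarity)
        + 0.2 * profile.question_complexity
    )


def choose_edit_intensity(difficulty: float) -> dict:
    """Pick hypothetical editing hyperparameters based on diagnosed difficulty."""
    if difficulty < 0.3:
        return {"edit_steps": 10, "edit_lr": 1e-4}   # easy item: light-touch edit
    if difficulty < 0.7:
        return {"edit_steps": 25, "edit_lr": 5e-4}   # moderate item
    return {"edit_steps": 50, "edit_lr": 1e-3}       # hard item: more aggressive edit


# Example: a low-popularity fact the model barely knows, asked via a complex question.
profile = KnowledgeProfile(popularity=0.1, familiarity=0.2, question_complexity=0.8)
print(choose_edit_intensity(diagnose_difficulty(profile)))
```

The point of the sketch is the control flow (diagnose once, then spend more editing effort only on items scored as difficult), which is how the framework could save computation on easy edits while boosting success on hard ones.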
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14561