Abstract: Large language models (LLMs) acquire vast amounts of knowledge from large text corpora, but this information can become outdated or inaccurate. Since full retraining is computationally expensive, knowledge editing offers an efficient alternative: modifying a model's internal knowledge in place. These methods aim to update target facts precisely while preserving the model's overall capabilities.
While existing surveys focus on the *mechanism* of editing (e.g., parameter changes vs. external memory), they often overlook the *function* of the knowledge being edited. This survey introduces a novel, complementary **function-based taxonomy** to provide a more holistic view. We examine how different mechanisms apply to various knowledge types—**factual, temporal, conceptual, commonsense, and social**—highlighting how editing effectiveness depends on the nature of the target knowledge.
By organizing our review along these two axes, we formally define the editing problem, map the current landscape, outline the strengths and limitations of existing methods, survey evaluation tasks and datasets, and conclude with open challenges and future directions.
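As a rough sketch of the formal problem the abstract alludes to, here is the formulation commonly used in the knowledge-editing literature; the notation below is an assumption for illustration, not taken from this paper:

```latex
% Common formalization of knowledge editing (notation assumed, not from the paper).
% Given a base model f_theta and an edit descriptor (x_e, y_e), an editor K yields
% a post-edit model f_theta' that returns the new answer on the edit itself
% (reliability) and on its paraphrase neighborhood N(x_e) (generality), while
% behaving identically on all unrelated inputs (locality):
\[
  f_{\theta'} = K\bigl(f_\theta, (x_e, y_e)\bigr), \qquad
  f_{\theta'}(x) =
  \begin{cases}
    y_e        & \text{if } x \in \{x_e\} \cup N(x_e), \\
    f_\theta(x) & \text{otherwise.}
  \end{cases}
\]
```

Under this view, editing mechanisms differ in how they realize $K$ (direct parameter updates vs. external memory), while the survey's function-based axis asks how the choice of $(x_e, y_e)$, e.g., a factual vs. a commonsense statement, affects how well these three criteria can be met.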
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: model editing, knowledge tracing, probing
Contribution Types: Surveys
Languages Studied: English
Submission Number: 1294