Beyond Memorization: A Rigorous Evaluation Framework for Medical Knowledge Editing

ACL ARR 2025 May Submission5168 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Recently, knowledge editing (KE) has emerged as a promising approach for updating specific facts in Large Language Models (LLMs) without full retraining. Despite its effectiveness on general-domain benchmarks, the applicability of KE in the complex medical domain remains largely unexplored. Medical knowledge editing is particularly challenging, as it requires LLMs to internalize the edited knowledge and generalize to unseen scenarios for effective and interpretable decision-making. In this work, we propose MedEditBench, a novel framework for rigorously evaluating the effectiveness of existing KE methods in the medical domain. In MedEditBench, we introduce a new medical knowledge editing benchmark as well as three knowledge editing paradigms, designed to assess the impact of different knowledge sources for editing. Our findings indicate that current KE methods result in only superficial memorization of the injected information, failing to generalize to new scenarios. To overcome this limitation, we present self-generated rationale editing (SGR-Edit), which uses model-derived rationales as the target knowledge for editing, thereby uncovering the underlying reasoning process and achieving significant improvements over existing approaches. Additionally, we offer deeper insights into medical knowledge editing, including the localization of medical knowledge in LLMs and the impact of sequential editing on evolving knowledge. These findings could provide practical guidance for deploying KE methods in real-world medical applications.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: model editing, medical knowledge editing
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Keywords: Model Editing, Large Language Models, Medical Knowledge Editing
Submission Number: 5168