One Mask to Rule Them All: On Hidden Facts after Editing and How to Find Them

ACL ARR 2026 January Submission 6437 Authors

05 Jan 2026 (modified: 20 Mar 2026), ACL ARR 2026 January Submission, CC BY 4.0
Keywords: Knowledge editing, ROME, MEMIT, Mechanistic interpretability
Abstract: Knowledge editing methods such as ROME and MEMIT update factual associations in transformer models by modifying MLP weights. While these methods are evaluated mainly by output behavior, their internal mechanism remains underexplored. We investigate whether edits rely on a common mechanism, regardless of which fact is modified. Despite fact-specific weight changes, we argue that ROME and MEMIT target the same subset of weights critical for maintaining edits. To isolate this subset, we train a compact binary mask (<10\%) over the edited weights. The mask reverses 80\% of edits on the training set and over 70\% on the test set, confirming that diverse edits share a common functional structure. Our analysis reveals that the mask reverses edits by eliminating over-attention in later layers. Additionally, we show that injecting the mask during editing drops editing success from 98\% to 38\%, demonstrating that this mechanism is necessary for edits to succeed. Our finding that edits suppress rather than overwrite knowledge explains why ROME and MEMIT fail to propagate changes to related facts. The identified common functional subspace informs the detection of and defense against unwanted edits.
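
To illustrate the kind of masking the abstract describes, below is a minimal sketch (not the authors' implementation) of a learnable binary mask over an edited weight matrix, assuming PyTorch and a straight-through estimator; the names (BinaryWeightMask, edited_weight, original_weight) and the sparsity penalty are illustrative assumptions, and the actual reversal loss used in the paper is not shown.

```python
# Hypothetical sketch: a learnable binary mask that selects a small subset of
# edited weights to revert to their pre-edit values. Not the paper's code.
import torch
import torch.nn as nn


class BinaryWeightMask(nn.Module):
    """Learns a sparse 0/1 mask over an edited weight matrix."""

    def __init__(self, weight_shape, init_logit=-3.0):
        super().__init__()
        # Negative initial logits make the mask start mostly "off" (sparse).
        self.logits = nn.Parameter(torch.full(weight_shape, init_logit))

    def forward(self, edited_weight, original_weight):
        # Straight-through estimator: hard 0/1 values in the forward pass,
        # sigmoid gradients in the backward pass.
        probs = torch.sigmoid(self.logits)
        hard = (probs > 0.5).float()
        mask = hard + probs - probs.detach()
        # Where mask == 1, restore the pre-edit weight; elsewhere keep the edit.
        return mask * original_weight + (1.0 - mask) * edited_weight

    def sparsity_penalty(self):
        # Regularizer that pushes the mask toward covering <10% of the weights.
        return torch.sigmoid(self.logits).mean()
```

In a training loop, one would apply the masked weight to the edited model, optimize the mask logits against a reversal objective (e.g., recovering the pre-edit prediction) plus the sparsity penalty, and keep the weight matrices themselves frozen.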
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: model editing, knowledge tracing/discovering/inducing, probing
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 6437