Keywords: Model Editing, Massive Editing, Large Language Models
Abstract: Model editing techniques are essential for efficiently updating knowledge in
large language models (LLMs). However, the effectiveness of existing approaches
degrades in massive editing scenarios, particularly when evaluated with
practical metrics. Their robustness is also limited in context-rich settings or
when editing multiple facts of the same subject simultaneously. We attribute
these failures to the embedding misalignment among knowledge items, which
undermines editing reliability at scale. To address this, we propose EAMET
(Embedding Alignment Model Editing in Transformers), which aligns the spaces
of key and residual embeddings. Extensive experiments
across six LLMs and three datasets demonstrate that EAMET consistently
outperforms existing methods, achieving about 90\% editing efficacy when editing
10k facts.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 4398