Keywords: Large Language Models, Knowledge Enhancement, Knowledge Editing
Abstract: The knowledge within large language models (LLMs) may become outdated quickly.
While in-context editing (ICE) is currently the most effective method for knowledge editing (KE), it is constrained by the black-box nature of LLMs and thus lacks interpretability.
Our work aims to elucidate the superior performance of ICE in KE by analyzing the impact of in-context new knowledge on token-wise distributions.
We observe that, despite a significant boost in the logits of the new knowledge, the performance of ICE is still hindered by stubborn knowledge.
Stubborn knowledge refers to facts that have gained excessive confidence during pretraining, making them hard to edit effectively.
To address this issue and further enhance the performance of ICE, we propose a novel approach termed **De**coding by **C**ontrasting **K**nowledge (**DeCK**).
DeCK derives the distribution of the next token by contrasting the logits obtained from the newly edited knowledge guided by ICE with those from the unedited parametric knowledge.
Our experiments consistently demonstrate that DeCK enhances the confidence of LLMs in edited facts.
For instance, it improves the performance of LLaMA3-8B-instruct on MQuAKE by up to 219%, demonstrating its capability to strengthen ICE in editing stubborn knowledge.
DeCK can be easily integrated into any ICE method as a decoding component to enhance editing capabilities.
Our work paves the way to develop both effective and accountable KE methods for LLMs.
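As a rough illustration of the contrastive step described in the abstract, the sketch below shows one plausible way to contrast logits conditioned on in-context edited knowledge against logits from the unedited parametric model. This is a minimal sketch under assumptions, not the authors' exact formulation: the function name, the `alpha` contrast weight, and the toy tensors are all hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_next_token_logits(edited_logits: torch.Tensor,
                                  unedited_logits: torch.Tensor,
                                  alpha: float = 1.0) -> torch.Tensor:
    """Amplify tokens favored under the in-context edited prompt relative to
    the unedited parametric model. `alpha` (hypothetical) sets the contrast
    strength; alpha = 0 recovers plain ICE decoding."""
    return (1 + alpha) * edited_logits - alpha * unedited_logits

# Toy usage: a vocabulary of 5 tokens, batch size 1.
edited = torch.tensor([[2.0, 0.5, 1.0, -1.0, 0.0]])    # logits with the new-knowledge prompt
unedited = torch.tensor([[2.5, 0.2, 0.3, -1.2, 0.1]])  # logits from the unedited model
probs = F.softmax(contrastive_next_token_logits(edited, unedited), dim=-1)
next_token = probs.argmax(dim=-1)  # token boosted by the edited knowledge
```

In this toy example, the token most preferred by the unedited model (index 0, the "stubborn" fact) is down-weighted, while the token whose probability rises under the edited prompt (index 2) is promoted.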
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5912