Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs?

TMLR Paper 2990 Authors

10 Jul 2024 (modified: 08 Oct 2024) · Decision pending for TMLR · CC BY 4.0
Abstract: The model editing problem concerns how language models should learn new facts about the world over time. While empirical research on model editing has drawn widespread attention, the conceptual foundations of model editing remain shaky, perhaps unsurprisingly, since model editing is essentially belief revision, a storied problem in philosophy that has eluded succinct solutions for decades. Model editing nonetheless demands a solution, since we need to be able to control the knowledge within language models. With this goal in mind, this paper critiques the standard formulation of the model editing problem and proposes a formal testbed for model editing research. We first describe 12 open problems with model editing, based on challenges with (1) defining the problem, (2) developing benchmarks, and (3) assuming that LLMs have editable beliefs in the first place. Many of these challenges are extremely difficult to address, e.g., determining the far-reaching consequences of edits, labeling probabilistic entailments between facts, and updating the beliefs of agent simulators. Next, we introduce a semi-synthetic dataset for model editing based on Wikidata, where we can evaluate edits against labels given by an idealized Bayesian agent. This enables us to say exactly how belief revision in language models falls short of a desirable epistemic standard. We encourage further research exploring settings where model behavior can be compared against such a gold standard.
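For intuition, the idealized Bayesian agent mentioned in the abstract treats an edit as evidence and updates its credence in a fact by Bayes' rule. Below is a minimal sketch of that kind of update; the function name and the numbers are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Bayes-rule belief update, where a belief is a scalar
# probability that a fact holds and an "edit" supplies evidence about it.
# The name `bayesian_update` and all numbers are hypothetical.

def bayesian_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Return the posterior P(fact | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# Example: the agent believes a fact with probability 0.3; an edit supplies
# evidence 9x more likely if the fact is true than if it is false.
posterior = bayesian_update(prior=0.3, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(f"{posterior:.3f}")  # 0.794 -- the edit raises, but does not pin down, the belief
```

Under this view, an edited language model can be scored by how far its post-edit credences deviate from the posterior such an agent would assign.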
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Added changes from the rebuttal of the first submission.
Assigned Action Editor: ~Ole_Winther1
Submission Number: 2990