Editing Large Language Models: Problems, Methods, and Opportunities

ICLR 2024 Workshop ME-FoMo, Submission 92 Authors

Published: 04 Mar 2024, Last Modified: 14 Apr 2024
ME-FoMo 2024 Poster, CC BY 4.0
Keywords: knowledge editing, model editing, large language models
TL;DR: An overview of editing large language models, along with an in-depth empirical analysis.
Abstract: Despite our ability to train capable LLMs, the methodology for keeping them up to date and rectifying their errors remains elusive. To this end, the past few years have witnessed a surge in techniques for editing LLMs, the objective of which is to efficiently alter the behavior of LLMs within a specific domain without negatively impacting performance on other inputs. This paper explores the problems, methods, and opportunities related to model editing for LLMs. In particular, we provide an exhaustive overview of the task definition and the challenges associated with model editing, along with an in-depth empirical analysis of the most prominent methods currently available. We also build a new benchmark dataset to facilitate more robust evaluation and to pinpoint enduring issues intrinsic to existing techniques. Our objective is to provide valuable insight into the effectiveness and feasibility of each editing technique, thereby helping the community make informed decisions when selecting the most appropriate method for a specific task or context. The full version of this paper has recently been accepted for publication at a non-ML venue.
Submission Number: 92