Position: Editing Large Language Models Poses Serious Safety Risks

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Position Paper Track (poster) · License: CC BY-SA 4.0
TL;DR: We argue that editing LLMs poses serious risks, and discuss potential countermeasures
Abstract: Large Language Models (LLMs) contain large amounts of facts about the world. These facts can become outdated over time, which has led to the development of knowledge editing methods (KEs) that can change specific facts in LLMs with limited side effects. This position paper argues that editing LLMs poses serious safety risks that have been largely overlooked. First, we note that the fact that KEs are widely available, computationally inexpensive, highly performant, and stealthy makes them an attractive tool for malicious actors. Second, we discuss malicious use cases of KEs, showing how KEs can be easily adapted for a variety of malicious purposes. Third, we highlight vulnerabilities in the AI ecosystem that allow unrestricted uploading and downloading of updated models without verification. Fourth, we argue that a lack of social and institutional awareness exacerbates this risk, and we discuss the implications for different stakeholders. We call on the community to (i) research tamper-resistant models and countermeasures against malicious model editing, and (ii) actively engage in securing the AI ecosystem.
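To make the threat model concrete, the sketch below is not one of the knowledge editing methods discussed in the paper, but a minimal illustration (assuming PyTorch, Hugging Face transformers, and the public gpt2 checkpoint) of how cheaply a single weight matrix can be altered and the model re-saved with no outward sign of tampering; real KEs such as ROME or MEMIT compute targeted updates of this kind rather than the random perturbation used here.

```python
# Minimal illustration (NOT a KE method from the paper): perturb one MLP
# projection in GPT-2 and re-save the model. The resulting checkpoint has the
# same file layout, config, and tokenizer as the original, so the change is
# invisible without weight-level inspection.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small causal LM works; gpt2 is only an example
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

with torch.no_grad():
    # Pick a single mid-layer MLP output projection and apply a tiny
    # rank-one update; real knowledge editors solve for such updates so that
    # one specific fact changes while other behavior is preserved.
    W = model.transformer.h[5].mlp.c_proj.weight   # shape (3072, 768) for gpt2
    u = torch.randn(W.shape[0], 1) * 1e-3          # illustrative directions only
    v = torch.randn(1, W.shape[1]) * 1e-3
    W += u @ v

# Save (and potentially re-upload) the edited model: nothing in the metadata
# signals that the weights were modified.
model.save_pretrained("./gpt2-edited")
tokenizer.save_pretrained("./gpt2-edited")
```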
Lay Summary: Large language models (LLMs), such as the one that powers ChatGPT, store vast amounts of information about the world. This information sometimes needs to be updated with new knowledge. So-called 'knowledge editing methods' can adapt LLMs to 'know' new facts. In this paper, we consider the potential misuse of knowledge editing methods. First, we point out that these methods are easy to use, inexpensive to implement, and difficult to detect, making them appealing to malicious actors. We then highlight several risks: these methods could be used to spread misinformation, manipulate opinions, or cause LLMs to provide harmful answers. Such altered LLMs can easily be uploaded and shared online without any checks to ensure they have not been tampered with. We urge LLM developers to build more secure systems that are harder to manipulate, to strengthen safeguards, and to develop tools that can reveal and undo changes. We also emphasize the need for greater awareness of these risks among researchers, developers, policymakers, and the public.
Primary Area: System Risks, Safety, and Government Policy
Keywords: Model Editing, Knowledge Editing, AI Safety, Malicious Editing
Submission Number: 267