TL;DR: We argue that editing LLMs poses serious risks, and discuss potential countermeasures
Abstract: Large Language Models (LLMs) encode large amounts of factual knowledge about the world. These facts can become outdated over time, which has led to the development of knowledge editing methods (KEs) that can change specific facts in LLMs with limited side effects. This position paper argues that editing LLMs poses serious safety risks that have been largely overlooked. First, we note that KEs are widely available, computationally inexpensive, highly performant, and stealthy, which makes them an attractive tool for malicious actors. Second, we discuss malicious use cases of KEs, showing how they can be easily adapted for a variety of malicious purposes. Third, we highlight vulnerabilities in the AI ecosystem that allow the unrestricted uploading and downloading of updated models without verification. Fourth, we argue that a lack of social and institutional awareness exacerbates this risk, and we discuss the implications for different stakeholders. We call on the community to (i) research tamper-resistant models and countermeasures against malicious model editing, and (ii) actively engage in securing the AI ecosystem.
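To illustrate why knowledge edits can be computationally inexpensive and stealthy, the sketch below applies a rank-one update to a single weight matrix in the spirit of locate-and-edit methods such as ROME. This is a schematic toy example with assumed dimensions and hypothetical variable names, not the paper's method or experimental setup.

```python
# Schematic rank-one "knowledge edit" (toy illustration, assumed shapes):
# the edit touches one matrix, needs no retraining, and leaves inputs
# unrelated to the edited subject essentially unchanged.
import torch

torch.manual_seed(0)
d_in, d_out = 64, 128
W = torch.randn(d_out, d_in) / d_in**0.5   # stand-in for one MLP projection

k = torch.randn(d_in)    # "key": hidden state associated with the edited subject
v = torch.randn(d_out)   # "value": activation encoding the new fact

# Rank-one update so that the edited matrix maps k exactly to v.
delta = torch.outer(v - W @ k, k) / (k @ k)
W_edited = W + delta

assert torch.allclose(W_edited @ k, v, atol=1e-4)   # new fact is stored

# Inputs orthogonal to k are unaffected, which is one reason edits are hard to detect.
k_other = torch.randn(d_in)
k_other -= (k_other @ k) / (k @ k) * k
assert torch.allclose(W_edited @ k_other, W @ k_other, atol=1e-4)
```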
Lay Summary: Large language models (LLMs), such as the one that powers ChatGPT, store vast amounts of information about the world. This information sometimes needs to be updated with new knowledge. So-called 'knowledge editing methods' can adapt LLMs to 'know' new facts.
In this paper, we consider the potential misuse of knowledge editing methods. First, we point out that these methods are easy to use, inexpensive to implement, and difficult to detect, making them appealing to malicious actors. We then highlight several risks: these methods could be used to spread misinformation, manipulate opinions, or cause LLMs to provide harmful answers. Such altered LLMs can easily be uploaded and shared online without any checks to ensure they have not been tampered with. We urge LLM developers to build models that are harder to manipulate, to strengthen safeguards, and to develop tools that can reveal and undo such changes. We also emphasize the need for greater awareness of these risks among researchers, developers, policymakers, and the public.
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera ready paper with the latest ICML2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: ZDJiM
Permissions Form: pdf
Primary Area: System Risks, Safety, and Government Policy
Keywords: Model Editing, Knowledge Editing, AI Safety, Malicious Editing
Submission Number: 267