On the Robustness of Editing Large Language Models

ACL ARR 2024 April Submission726 Authors

16 Apr 2024 (modified: 23 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: Large language models (LLMs) have played a pivotal role in building communicative AI, yet they face the challenge of efficient customization. Model editing makes it possible to manipulate a model's specific memories and its language-generation behavior without retraining. However, the robustness of model editing remains an open question. This work seeks to understand the strengths and limitations of editing methods, thereby facilitating practical applications of communicative AI. We focus on three key research questions. RQ1: Can edited LLMs behave consistently, resembling communicative AI, in realistic situations? RQ2: To what extent does the rephrasing of prompts lead LLMs to deviate from the edited knowledge memory? RQ3: Which knowledge features are correlated with the performance and robustness of editing? Our empirical studies uncover a substantial disparity between existing editing methods and the practical application of LLMs. On rephrased prompts, which are flexible yet common in realistic applications, editing performance declines significantly. Further analysis shows that more popular knowledge is memorized better, easier to recall, and more challenging to edit effectively.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: robustness, knowledge tracing/discovering/inducing, adversarial attacks/examples/training, data influence
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study
Languages Studied: English
Submission Number: 726