Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization

ACL ARR 2024 June Submission1632 Authors

14 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: To ensure large language models contain up-to-date knowledge, they need to be updated regularly. However, model editing is challenging because it may also affect knowledge that is unrelated to the new data. State-of-the-art methods identify parameters associated with specific knowledge and then modify them via direct weight updates. However, these locate-and-edit methods suffer from heavy computational overhead and lack theoretical validation. In contrast, directly fine-tuning the model on requested edits affects the model's behavior on unrelated knowledge and significantly damages the model's generation fluency and consistency. To address these challenges, we propose SAUL, a streamlined model editing method that uses **s**entence concatenation with **au**gmented random facts for generation regu**l**arization. Evaluations on three model editing benchmarks show that SAUL is a practical and reliable solution for model editing, outperforming state-of-the-art methods while maintaining generation quality and reducing computational overhead.
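The abstract describes the approach only at a high level, so the sketch below is a minimal, hypothetical illustration of fine-tuning on a requested edit concatenated with an unrelated random fact for regularization. The model name, the example edit sentence, the list of random facts, the learning rate, and the number of steps are all placeholder assumptions, not details taken from the paper.

```python
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model and data; the paper does not specify these.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

edit_text = "The capital of the example country is New City."  # hypothetical requested edit
random_facts = [
    "Mount Everest is the highest mountain on Earth.",
    "Water boils at 100 degrees Celsius at sea level.",
]  # hypothetical pool of unrelated facts used for augmentation

num_steps = 10  # assumed number of fine-tuning steps
model.train()
for _ in range(num_steps):
    # Concatenate the requested edit with a randomly sampled unrelated fact,
    # so the update is computed over fluent, mixed text rather than the
    # edit sentence alone (the regularization idea named in the abstract).
    fact = random.choice(random_facts)
    text = f"{edit_text} {fact}"
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

This is only one plausible reading of "sentence concatenation with augmented random facts"; the paper's actual training objective, data construction, and loss masking may differ.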
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: Continual learning, Fine-tuning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1632