LookAhead Tuning: Safer Language Models via Partial Answer Previews

ACL ARR 2025 May Submission 1982 Authors

18 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Fine-tuning enables large language models (LLMs) to adapt to specific domains, but often undermines their previously established safety alignment. To mitigate the degradation of model safety during fine-tuning, we introduce LookAhead Tuning, which comprises two simple, low-resource, and effective data-driven methods that modify training data by previewing partial answer prefixes. Both methods aim to preserve the model's inherent safety mechanisms by minimizing perturbations to initial token distributions. Comprehensive experiments demonstrate that LookAhead Tuning effectively maintains model safety without sacrificing robust performance on downstream tasks. Our findings position LookAhead Tuning as a reliable and efficient solution for the safe and effective adaptation of LLMs.
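The abstract describes LookAhead Tuning only at a high level: the training data are modified so that a partial prefix of the answer is previewed before the model is fine-tuned on the full answer. The sketch below is a minimal illustration of that idea under our own assumptions; the field names, the preview length `m`, and the prompt template are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the authors' exact recipe): augment each
# training example so the prompt "previews" the first few answer tokens.

def lookahead_preview(example: dict, m: int = 6) -> dict:
    """Prepend the first `m` whitespace-split tokens of the answer to the prompt."""
    prefix = " ".join(example["answer"].split()[:m])
    previewed_prompt = (
        f"{example['prompt']}\n"
        f"(The answer begins with: \"{prefix} ...\")"
    )
    # Only the prompt side is modified; the target remains the full answer.
    # Because the earliest answer tokens are already visible in the prompt,
    # fitting them requires smaller updates, which is the intuition behind
    # "minimizing perturbations to initial token distributions".
    return {"prompt": previewed_prompt, "answer": example["answer"]}


if __name__ == "__main__":
    sample = {
        "prompt": "Summarize the main finding of the study.",
        "answer": "The study finds that fine-tuning can erode safety alignment.",
    }
    print(lookahead_preview(sample))
```

In practice one would apply such a transformation to every example in the fine-tuning set before tokenization; the transformed data can then be used with any standard supervised fine-tuning pipeline.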
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: security and privacy, fine-tuning
Contribution Types: NLP engineering experiment
Languages Studied: English
Keywords: security and privacy, fine-tuning
Submission Number: 1982