Keywords: large language models, safety, fine-tuning
TL;DR: We study fine-tuning risks associated with closed-source large language models, showing that malicious users can increase harmfulness by modifying almost any task-specific dataset, and providing a novel mitigation strategy based on mimicking user data.
Abstract: Fine-tuning large language models on task-specific datasets can enhance their performance on downstream tasks. However, recent research shows that fine-tuning on benign, instruction-following data can inadvertently undo safety alignment and increase a model's propensity to comply with harmful queries. Understanding and mitigating safety risks for well-defined tasks is therefore critical, yet it remains distinct from the instruction-following context due to structural differences in the data. Our work explores the risks associated with fine-tuning closed-source models across diverse task-specific data. We demonstrate how malicious actors can subtly manipulate the structure of almost *any* task-specific dataset to foster significantly more dangerous model behaviors, while maintaining an appearance of innocuity and reasonable downstream task performance. To mitigate this issue, we propose a novel strategy that mixes in safety data which *mimics* the format and style of the user data, showing this is more effective than baselines at re-establishing safety while maintaining similar task performance.
Submission Number: 36