Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning Attack
TL;DR: This paper proposes a post-fine-tuning-stage defense against harmful fine-tuning attacks.
Abstract: Safety-aligned Large Language Models (LLMs) are vulnerable to harmful fine-tuning attacks -- a few harmful data mixed into the fine-tuning dataset can break the LLM's safety alignment. While several defenses have been proposed, our evaluation shows that existing defenses fail \textit{when some specific training hyper-parameters are chosen} -- a large learning rate or a large number of training epochs in the fine-tuning stage can easily invalidate the defense. To this end, we propose Antidote, a post-fine-tuning-stage solution, which remains \textbf{\textit{agnostic to the training hyper-parameters in the fine-tuning stage}}. Antidote relies on the philosophy that by removing the harmful parameters, the model can be recovered from its harmful behaviors, regardless of how those harmful parameters are formed in the fine-tuning stage. With this philosophy, we introduce a one-shot pruning stage after harmful fine-tuning to remove the harmful weights that are responsible for the generation of harmful content. Despite its embarrassing simplicity, empirical results show that Antidote can reduce the harmful score while maintaining accuracy on downstream tasks.
Lay Summary: Large language models may go out of control without proper safety alignment. For example, they may deliver harmful speech or exhibit even more seriously improper behavior. Recent findings show that, while we are able to instruct the large language model to do good things with safety alignment, the safety alignment is so fragile that it can easily be broken **if we fine-tune the model**. This safety issue is known as the harmful fine-tuning attack.
To counter the harmful fine-tuning attack, our paper proposes a solution, named Antidote, to recover the model from its harmful behavior. Our high-level idea is straightforward -- we identify a few harmful parameters in the model and remove them (i.e., zero them out). Results show that such an embarrassingly simple method can recover the fine-tuned model from the harmful state back to the safe state.
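To make the high-level idea concrete, below is a minimal sketch of "identify a few harmful parameters and zero them out." The scoring rule (gradient-times-weight saliency accumulated over a small set of harmful examples) and the pruning ratio are illustrative assumptions for this sketch, not necessarily the criterion used by Antidote itself.

```python
# Illustrative sketch only: score parameters by their apparent contribution to
# harmful outputs, then zero out the top-scoring fraction. Assumes a
# Hugging Face-style causal LM whose forward pass returns a .loss, and a
# dataloader `harmful_loader` yielding tokenized harmful examples.
import torch

def prune_harmful_weights(model, harmful_loader, ratio=0.001, device="cuda"):
    model.to(device).train()
    # Accumulate |gradient * weight| saliency over a few harmful batches.
    saliency = {n: torch.zeros_like(p)
                for n, p in model.named_parameters() if p.requires_grad}
    for batch in harmful_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        model.zero_grad()
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                saliency[n] += (p.grad * p.data).abs()
    # Zero out the globally top-`ratio` fraction of parameters by saliency.
    all_scores = torch.cat([s.flatten() for s in saliency.values()])
    k = max(1, int(ratio * all_scores.numel()))
    threshold = torch.topk(all_scores, k).values.min()
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in saliency:
                p.masked_fill_(saliency[n] >= threshold, 0.0)
    return model
```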
Primary Area: Social Aspects->Safety
Keywords: Harmful fine-tuning attack, Large language models
Submission Number: 721