The effect of fine-tuning on language model toxicity

Published: 12 Oct 2024 · Last Modified: 14 Nov 2024 · SafeGenAI Oral · CC BY 4.0
Keywords: Fine-tuning, toxicity, language models, safety
TL;DR: Fine-tuning can alter the toxicity rates of open language models in hard-to-predict ways, which we demonstrate both in controlled settings and in community-tuned models on Hugging Face.
Abstract: Fine-tuning language models has become increasingly popular following the proliferation of open models and improvements in cost-effective, parameter-efficient fine-tuning. However, fine-tuning can also influence model properties such as safety. We assess how fine-tuning affects different open models’ propensity to output toxic content, examining Gemma, Llama, and Phi models through three experiments. We first compare how model developers reduce toxicity during instruction-tuning. We then show that a small amount of parameter-efficient fine-tuning of developer-tuned models, applied via low-rank adaptation on a non-adversarial dataset, can significantly alter these toxicity levels across models. Finally, we highlight the impact of this in the wild, demonstrating how the toxicity rates of models fine-tuned by community contributors can deviate in hard-to-predict ways.
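The abstract describes applying small amounts of parameter-efficient fine-tuning via low-rank adaptation (LoRA) to developer-tuned models on a non-adversarial dataset. The sketch below illustrates what such a LoRA fine-tuning run could look like with the Hugging Face `transformers`, `peft`, and `datasets` libraries; the paper does not specify its training setup, so the model name, dataset, target modules, and hyperparameters here are placeholder assumptions, not the authors' configuration.

```python
# Minimal LoRA fine-tuning sketch (illustrative assumptions, not the paper's setup).
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          TrainingArguments, Trainer,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "google/gemma-2b-it"  # placeholder: any developer instruction-tuned model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adaptation: only small adapter matrices are trained,
# leaving the base model weights frozen.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# A non-adversarial instruction dataset (placeholder choice).
data = load_dataset("databricks/databricks-dolly-15k", split="train[:1000]")

def tokenize(example):
    text = example["instruction"] + "\n" + example["response"]
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=data,
    # Causal LM collator copies input_ids into labels for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

After such a run, the adapted model's toxicity could be scored by generating completions for a standard prompt set and passing them through a toxicity classifier, which is one plausible way to reproduce the kind of before/after comparison the abstract describes.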
Submission Number: 118