Chained Tuning Leads to Biased Forgetting

Published: 19 Jun 2024, Last Modified: 09 Jul 2024 · ICML 2024 TiFA Workshop · CC BY 4.0
Keywords: bias, catastrophic forgetting, large language models, safety, toxicity, evaluation
TL;DR: We investigate how different fine-tuning regimes (task ordering, fine-tuning methods, learning rate) affect the level of catastrophic forgetting for bias and safety metrics.
Abstract: Large language models (LLMs) are often fine-tuned for downstream tasks, though this can degrade capabilities learned during previous training. This phenomenon, often referred to as catastrophic forgetting, has important implications for the safety of deployed models. In this work, we first show that models trained on downstream tasks forget their safety tuning to a greater extent than models trained in the opposite order. Second, we show that forgetting disproportionately impacts safety information about certain groups. To quantify this phenomenon, we define a new metric we term biased forgetting, and conduct a systematic evaluation of the effects of several fine-tuning methods and hyperparameters on forgetting. We hope our findings can better inform methods for chaining the fine-tuning of LLMs in continual learning settings, enabling the training of safer and less toxic models.
Submission Number: 16