Keywords: Mechanistic Interpretability, Model Poisoning, Neuroplasticity
TL;DR: Mechanistic analysis of task-specific fine-tuning, extended to toxic fine-tuning and to relearning after toxic fine-tuning.
Abstract: Previous research has shown that fine-tuning language models on general tasks enhances their underlying mechanisms. However, the impact of fine-tuning on poisoned data, and the resulting changes in these mechanisms, is poorly understood. Prior work has also shown that language models exhibit neuroplasticity when pruned and then retrained; we explore whether this behavior arises when fine-tuning a corrupted model (i.e., a model trained on corrupted data) on the original dataset. This study investigates the changes in a model's mechanisms during toxic fine-tuning and identifies the primary corruption mechanisms. We also analyze the changes after retraining on the original dataset and observe neuroplasticity, where the model relearns its original mechanisms after the corrupted model is fine-tuned on clean data. Our findings indicate that: (i) underlying mechanisms are amplified during task-specific fine-tuning, and this amplification generalizes to longer training; (ii) model corruption via toxic fine-tuning is localized to specific circuit components; (iii) models exhibit neuroplasticity when corrupted models are retrained on the clean dataset, re-forming the original mechanisms.
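The experimental pipeline described in the abstract (task-specific fine-tuning, toxic fine-tuning, then retraining the corrupted model on the clean data) could be set up roughly as sketched below. This is a minimal illustration assuming Hugging Face Transformers/Datasets and a GPT-2 base model; the text lists, hyperparameters, and checkpoint names are placeholders, not the authors' actual setup.

```python
# Minimal sketch of the three-stage pipeline: clean task fine-tuning,
# toxic fine-tuning, then retraining the corrupted model on clean data.
# Assumptions: Hugging Face Transformers/Datasets, a GPT-2 base model,
# and placeholder text lists standing in for the real datasets.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def to_lm_dataset(texts):
    """Tokenize raw text into a causal-LM dataset (labels = input_ids)."""
    enc = tokenizer(texts, truncation=True, padding="max_length", max_length=64)
    enc["labels"] = [ids.copy() for ids in enc["input_ids"]]
    return Dataset.from_dict(dict(enc))

def finetune(model, dataset, output_dir, epochs=1):
    """Fine-tune in place and save a checkpoint for later mechanistic comparison."""
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=epochs,
                             per_device_train_batch_size=4, report_to="none")
    Trainer(model=model, args=args, train_dataset=dataset).train()
    model.save_pretrained(output_dir)
    return model

clean_texts = ["example clean task sentence"]   # placeholder for the original task data
toxic_texts = ["example poisoned sentence"]     # placeholder for the poisoned data
clean_ds, toxic_ds = to_lm_dataset(clean_texts), to_lm_dataset(toxic_texts)

model = finetune(model, clean_ds, "ckpt_task")       # (i) task-specific fine-tuning
model = finetune(model, toxic_ds, "ckpt_corrupted")  # (ii) toxic fine-tuning (corruption)
model = finetune(model, clean_ds, "ckpt_relearned")  # (iii) retraining on clean data (neuroplasticity)
# Each saved checkpoint can then be compared mechanistically (e.g., circuit-level analysis).
```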
Submission Number: 98