Accidental Vulnerability: Factors in Fine-Tuning that Shift Model Safeguards

Published: 25 Jul 2025, Last Modified: 12 Oct 2025 · COLM 2025 Workshop SoLaR Poster · CC BY 4.0
Keywords: Security and privacy concerns of LMs, red-teaming, fine-tuning, safety and alignment, robustness
TL;DR: We investigate how fine-tuning affects a model's vulnerability to jailbreaking and explain the effect through characteristics of the fine-tuning data.
Abstract: As large language models (LLMs) gain popularity, their vulnerability to adversarial attacks has emerged as a primary concern. While fine-tuning models on domain-specific datasets is often employed to improve model performance, it can inadvertently introduce vulnerabilities into the underlying model. In this work, we investigate \emph{Accidental Vulnerability}: unexpected vulnerability arising from characteristics of fine-tuning data. We begin by identifying factors potentially correlated with vulnerability, such as linguistic features, semantic similarity, and toxicity, across multiple experimental datasets. We then evaluate the adversarial robustness of the fine-tuned models, analyzing persona shifts and interpretability traits to understand how these dataset factors contribute to attack success rates. Lastly, we explore causal relationships that offer new insights into adversarial defense strategies, highlighting the crucial role of dataset design in preserving model alignment.
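To make the analysis pipeline concrete, here is a minimal sketch (not the authors' code) of how one might quantify a dataset-level factor such as semantic similarity to harmful instructions and correlate it with a measured attack success rate (ASR). The inputs `datasets`, `harmful_refs`, and `asr_by_dataset` are hypothetical placeholders, and the embedding model choice is an assumption.

```python
# Sketch: compute a per-dataset factor and correlate it with measured ASR.
# Assumes hypothetical inputs: `prompts` (a dataset's training instructions),
# `harmful_refs` (a reference set of harmful instructions), and
# `asr_by_dataset` (jailbreak ASR measured on each fine-tuned model).
import numpy as np
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def semantic_similarity_to_harmful(prompts, harmful_refs):
    """Mean cosine similarity between dataset prompts and harmful references
    (one candidate 'accidental vulnerability' factor)."""
    p = embedder.encode(prompts, normalize_embeddings=True)
    h = embedder.encode(harmful_refs, normalize_embeddings=True)
    return float((p @ h.T).mean())

def correlate_factor_with_asr(factor_by_dataset, asr_by_dataset):
    """Pearson correlation between a per-dataset factor and its measured ASR."""
    names = sorted(factor_by_dataset)
    x = np.array([factor_by_dataset[n] for n in names])
    y = np.array([asr_by_dataset[n] for n in names])
    return pearsonr(x, y)  # (correlation coefficient, p-value)
```

In this sketch, a positive, significant correlation across fine-tuning datasets would suggest that the factor (here, semantic proximity to harmful content) is associated with increased jailbreak susceptibility; the paper additionally examines toxicity, linguistic features, and causal relationships beyond simple correlation.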
Submission Number: 4