Accidental Vulnerability: Factors in Fine-Tuning that Shift Model Safeguards

ACL ARR 2025 July Submission 181 Authors

24 Jul 2025 (modified: 01 Sept 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: As large language models (LLMs) gain popularity, their vulnerability to adversarial attacks has emerged as a primary concern. While fine-tuning models on domain-specific datasets is often employed to improve model performance, it can inadvertently introduce vulnerabilities into the underlying model. In this work, we investigate *Accidental Vulnerability*: unexpected vulnerability arising from characteristics of fine-tuning data. We begin by identifying potential correlation factors such as linguistic features, semantic similarity, and toxicity across multiple experimental datasets. We then evaluate the adversarial robustness of these fine-tuned models, analyzing persona shifts and interpretability traits to understand how dataset factors contribute to attack success rates. Lastly, we explore causal relationships that offer new insights into adversarial defense strategies, highlighting the crucial role of dataset design in preserving model alignment.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: fine-tuning, red-teaming, safety and alignment, robustness, security
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Previous URL: https://openreview.net/forum?id=jOdJikbl3u
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: Yes, I want a different area chair for our submission
Reassignment Request Reviewers: Yes, I want a different set of reviewers
Justification For Not Keeping Action Editor Or Reviewers: We respectfully request reassignment of the Action Editor and reviewers. While we appreciate their efforts, the reviews were updated only minimally during the author-reviewer discussion despite substantial revisions and additional experiments. Key concerns such as experimental scope, confounding factors, and horizontal analysis were directly addressed but largely unacknowledged. Additionally, several comments indicated limited familiarity with the relevant literature and with the constraints outlined in our limitations section, suggesting a mismatch in reviewer expertise.
Software: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Ethics Statement (after Section 7: Limitations)
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 3
B2 Discuss The License For Artifacts: No
B2 Elaboration: All datasets and frameworks (HarmBench, lm-eval) are released under the MIT license and were publicly available at the time of experimentation.
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: Yes
B4 Elaboration: Ethics Statement (after Section 7: Limitations)
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Section 3, Section 4
B6 Statistics For Data: Yes
B6 Elaboration: Section 3
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 3, Section 7: Limitations
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 3
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 3, Section 4, Section 5
C4 Parameters For Packages: Yes
C4 Elaboration: Section 3
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: No
E1 Elaboration: We used AI assistants to troubleshoot minor bugs in Python code for visualizations and SLURM scripts.
Author Submission Checklist: yes
Submission Number: 181