Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · License: CC BY 4.0
Abstract: Recent studies have uncovered a troubling vulnerability in the fine-tuning stage of large language models (LLMs): even fine-tuning on entirely benign datasets can lead to a significant increase in the harmfulness of LLM outputs. Building on this finding, our red teaming study takes this threat one step further by developing a more effective attack. Specifically, we analyze and identify the samples within benign datasets that contribute most to safety degradation, then fine-tune LLMs exclusively on these samples. We approach this problem from an outlier detection perspective and propose Self-Inf-N to detect and extract outliers for fine-tuning. Our findings reveal that fine-tuning LLMs on just 100 outlier samples selected by Self-Inf-N from a benign dataset severely compromises LLM safety alignment. Extensive experiments across seven mainstream LLMs demonstrate that our attack exhibits high transferability across different architectures and remains effective in practical scenarios. Alarmingly, our results indicate that most existing mitigation strategies fail to defend against this attack, underscoring the urgent need for more robust alignment safeguards. Code is available at https://github.com/GuanZihan/Benign-Samples-Matter.
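To give a concrete picture of the selection step sketched in the abstract, the snippet below illustrates one way to rank benign samples by a self-influence-style score (here, the gradient norm of each sample's own fine-tuning loss) and keep the 100 highest-scoring outliers. This is a minimal sketch under assumed details: the stand-in model ("gpt2"), the gradient-norm scoring, and the placeholder dataset are illustrative choices, not the authors' Self-Inf-N definition; the exact criterion and implementation are given in the paper and the linked repository.

```python
# Illustrative sketch only: score benign samples with a simple self-influence
# proxy (per-sample gradient norm of the model's own loss) and keep the top N.
# The paper's actual Self-Inf-N criterion may differ; see the linked repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # stand-in; the attack targets aligned chat LLMs such as Llama
N_OUTLIERS = 100      # number of outlier samples used for fine-tuning in the attack

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.train()

def self_influence_score(text: str) -> float:
    """Gradient-norm proxy for a sample's influence on its own loss."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    out = model(**enc, labels=enc["input_ids"])
    model.zero_grad()
    out.loss.backward()
    sq = 0.0
    for p in model.parameters():
        if p.grad is not None:
            sq += p.grad.detach().pow(2).sum().item()
    return sq ** 0.5

benign_texts = ["..."]  # placeholder: the benign fine-tuning dataset
scored = sorted(((self_influence_score(t), t) for t in benign_texts),
                key=lambda x: x[0], reverse=True)
outliers = [t for _, t in scored[:N_OUTLIERS]]
# `outliers` would then serve as the only fine-tuning data in the attack.
```

In practice, per-sample full-gradient norms are expensive for large models, so an implementation would typically restrict the score to a subset of parameters or use other approximations; that design choice is left open here.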
Lay Summary: We discovered a surprising and dangerous weakness in the fine-tuning stage of large language models (LLMs) such as Llama: even if you train these models on completely harmless, “benign” text, their safety alignment can still break down. We took this vulnerability further by designing a targeted attack. Instead of using random examples from a benign dataset, we carefully selected just 100 specific examples — the most “unusual” or “outlier” ones — using a new method we developed, called Self-Inf-N. When fine-tuned only on these samples, models started generating much more harmful content. This attack works across many popular LLMs, showing it is not a fluke with one model. Worse, common defense strategies fail to stop it. Our work highlights an urgent problem: even seemingly safe training data can quietly undermine a model’s safety. Stronger safeguards are needed to ensure LLMs remain trustworthy and aligned, especially as they are increasingly used in sensitive areas.
Link To Code: https://github.com/GuanZihan/Benign-Samples-Matter
Primary Area: Deep Learning->Large Language Models
Keywords: Fine-tuning, Safety Alignment, Data Selection
Submission Number: 12855