Improving LLM Safety Alignment with Dual-Objective Optimization

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract:

Existing training-time safety alignment techniques for large language models (LLMs) remain vulnerable to jailbreak attacks. Direct preference optimization (DPO), a widely deployed alignment method, exhibits limitations in both experimental and theoretical contexts, as its loss function proves suboptimal for refusal learning. Through gradient-based analysis, we identify these shortcomings and propose an improved safety alignment method that disentangles the DPO objective into two components: (1) robust refusal training, which encourages refusal even when partial unsafe generations are produced, and (2) targeted unlearning of harmful knowledge. This approach significantly increases LLM robustness against a wide range of jailbreak attacks, including prefilling, suffix, and multi-turn attacks, in both in-distribution and out-of-distribution scenarios. Furthermore, we introduce a method to emphasize critical refusal tokens by incorporating a reward-based token-level weighting mechanism for refusal learning, which further improves robustness against adversarial exploits. Our research also suggests that robustness to jailbreak attacks is correlated with token distribution shifts during training and with internal representations of refusal and harmful tokens, offering valuable directions for future research in LLM safety alignment. The code is available at https://github.com/wicai24/DOOR-Alignment.
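
As a concrete illustration, the sketch below shows one way such a dual objective could be written in PyTorch. This is our own simplified reading of the abstract, not the authors' released code: the batch fields, the NPO-style form of the unlearning term, and the hyperparameters are all assumptions made for exposition.

```python
# Illustrative sketch only (not the released DOOR implementation): a dual-objective
# loss combining (1) robust refusal training -- maximize the likelihood of a refusal
# continuation even after a partial unsafe prefix has been prefilled -- and
# (2) targeted unlearning -- push down the likelihood of the harmful response,
# here via an NPO-style bounded term rather than plain gradient ascent.

import torch
import torch.nn.functional as F


def sequence_logprob(model, input_ids, target_mask):
    """Sum of log-probabilities of the tokens selected by target_mask."""
    logits = model(input_ids).logits[:, :-1, :]        # predict token t+1 from token t
    labels = input_ids[:, 1:]
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return (token_logp * target_mask[:, 1:]).sum(dim=-1)


def dual_objective_loss(model, batch, beta=0.1, unlearn_weight=1.0):
    # (1) Robust refusal: prompt + (possibly empty) unsafe prefix -> refusal target.
    refusal_logp = sequence_logprob(
        model, batch["prefix_plus_refusal_ids"], batch["refusal_target_mask"]
    )
    refusal_loss = -refusal_logp.mean()

    # (2) Targeted unlearning of the harmful continuation, using an NPO-style loss
    # with a frozen reference model's precomputed log-probabilities.
    harmful_logp = sequence_logprob(
        model, batch["prompt_plus_harmful_ids"], batch["harmful_target_mask"]
    )
    ref_logp = batch["ref_harmful_logp"]
    unlearn_loss = (2.0 / beta) * F.softplus(beta * (harmful_logp - ref_logp)).mean()

    return refusal_loss + unlearn_weight * unlearn_loss
```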

Lay Summary:

Large language models (LLMs) can be tricked into generating harmful content through "jailbreak" attacks, and current safety methods aren't always effective. This research introduces a new training technique called Dual-Objective Optimization for Refusal (DOOR).

DOOR improves LLM safety by focusing on two key areas:

  • Robust Refusal Training: Teaching the model to consistently refuse unsafe requests, even if it initially starts generating problematic content.
  • Targeted Unlearning: Actively removing or suppressing harmful knowledge within the model.

An enhanced version, W-DOOR, further refines this by emphasizing critical "refusal" words (like "Sorry") during training, making the model quicker to identify and reject harmful prompts.
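
The following is a minimal sketch of what such reward-based token-level weighting could look like, again as an illustration rather than the paper's exact scheme; the per-example normalization and the source of the per-token rewards are assumptions.

```python
# Illustrative sketch only: reward-based token-level weighting for the refusal loss.
# Tokens judged more refusal-critical (e.g. "Sorry", "I cannot") receive larger
# weights, so their log-likelihood dominates the training objective.

import torch


def weighted_refusal_loss(token_logp, token_rewards, target_mask):
    """token_logp:    [B, T] per-token log-probs of the refusal target
       token_rewards: [B, T] nonnegative scores for how refusal-critical each token is
       target_mask:   [B, T] 1 for refusal-target tokens, 0 elsewhere"""
    weights = token_rewards * target_mask
    # Normalize per example so the weighted loss stays on the same scale as a plain NLL.
    weights = weights / weights.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return -(weights * token_logp).sum(dim=-1).mean()
```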

Experiments show that DOOR and W-DOOR significantly boost an LLM's defenses against various jailbreak attacks. This is achieved while maintaining the model's general usefulness and without causing it to refuse safe requests too often. The findings aim to help develop safer and more trustworthy AI systems.

Primary Area: Social Aspects->Safety
Keywords: LLM Safety, Alignment, DPO
Submission Number: 15093