Keywords: Large Language Models, Safety, Jailbreak Attack
TL;DR: This paper analyzes the unique jailbreak vulnerabilities of Diffusion LLMs and proposes DiffuGuard, a novel training-free defense framework to mitigate them.
Abstract: The rapid advancement of Diffusion Large Language Models (dLLMs) introduces unprecedented vulnerabilities that are fundamentally distinct from those of Autoregressive LLMs, stemming from their iterative and parallel generation mechanisms. In this paper, we conduct an in-depth analysis of dLLM vulnerabilities to jailbreak attacks across two distinct dimensions: intra-step and inter-step dynamics. Experimental results reveal a harmful bias inherent in the standard greedy remasking strategy and identify a critical phenomenon we term Denoising-path Dependence, where the safety of early-stage tokens decisively influences the final output. These findings also indicate that while current decoding strategies constitute a significant vulnerability, dLLMs possess substantial intrinsic safety potential. To unlock this potential, we propose **DiffuGuard**, a training-free defense framework that addresses these vulnerabilities through a dual-stage approach: **Stochastic Annealing Remasking** dynamically introduces controlled randomness to mitigate greedy selection bias, while **Block-level Audit and Repair** exploits internal model representations for autonomous risk detection and guided correction. Comprehensive experiments on four dLLMs demonstrate DiffuGuard's exceptional effectiveness, reducing the Attack Success Rate against six diverse jailbreak methods from **47.9%** to **14.7%** while preserving model utility and efficiency.
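To make the Stochastic Annealing Remasking idea concrete, below is a minimal illustrative sketch of how controlled randomness could be injected into the greedy remasking step of a dLLM decoder. The function name `stochastic_annealing_remask`, the Gumbel perturbed top-k formulation, and the linear temperature schedule (`tau0`) are assumptions for illustration only; the abstract does not specify the paper's exact mechanism.

```python
import torch

def stochastic_annealing_remask(confidence: torch.Tensor,
                                num_to_unmask: int,
                                step: int,
                                total_steps: int,
                                tau0: float = 1.0) -> torch.Tensor:
    """Choose which masked positions to unmask at one denoising step.

    Instead of the purely greedy rule (top-k confidence), sample positions
    via Gumbel-perturbed top-k at a temperature that anneals toward zero,
    so early steps are more exploratory and late steps approach greedy.
    The schedule and hyperparameters here are illustrative assumptions,
    not the paper's exact formulation.
    """
    # Linearly anneal the temperature from tau0 toward (near) zero.
    tau = max(tau0 * (1.0 - step / total_steps), 1e-3)
    # Gumbel-perturbed top-k: adding Gumbel noise to confidence / tau and
    # taking the k largest scores samples k positions without replacement
    # from softmax(confidence / tau); as tau -> 0 this reduces to greedy.
    gumbel = -torch.log(-torch.log(torch.rand_like(confidence)))
    perturbed = confidence / tau + gumbel
    return torch.topk(perturbed, k=num_to_unmask).indices
```

In this sketch, `confidence` would be the per-position confidence scores a dLLM already computes for greedy remasking, so the change is confined to the selection rule rather than the model itself, which is consistent with the training-free framing in the abstract.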
Submission Number: 50