Toward Safer Diffusion Language Models: Discovery and Mitigation of Priming Vulnerability

Published: 26 Jan 2026, Last Modified: 11 Apr 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: safety, jailbreak, diffusion language models
TL;DR: We identify a critical vulnerability of diffusion language models and propose a countermeasure to mitigate it.
Abstract: Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, which can reduce latency and enable bidirectional conditioning. However, the safety risks posed by jailbreak attacks that exploit this inference mechanism are not well understood. In this paper, we reveal a critical vulnerability of DLMs stemming from their iterative denoising process and propose a countermeasure. Specifically, our investigation shows that if an affirmative token for a harmful query appears at an intermediate denoising step, subsequent denoising can be steered toward a harmful response, even in aligned models. We further demonstrate that this vulnerability allows existing optimization-based jailbreak attacks to be applied to DLMs. Building on this analysis, we propose a novel safety alignment method tailored to DLMs that trains models to generate safe responses from contaminated intermediate denoising states containing affirmative tokens. Our experiments indicate that the proposed method significantly mitigates the vulnerability with minimal impact on task performance, and it also improves robustness against conventional jailbreak attacks. Our work underscores the need for DLM-specific safety research. Our code is available at [https://github.com/mdl-lab/dlm-priming-vulnerability](https://github.com/mdl-lab/dlm-priming-vulnerability).
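The priming mechanism described in the abstract can be caricatured in a few lines: a DLM reveals masked tokens over several denoising steps, so a token planted at an intermediate step becomes part of the context that every later step conditions on. The sketch below is a toy simulation, not the paper's method or any real DLM; `toy_denoiser`, the token strings, and the hard-coded refuse/comply behavior are all illustrative assumptions.

```python
# Toy illustration of the priming vulnerability in a masked diffusion LM.
# A real DLM predicts token distributions; here a stub "denoiser" fills one
# masked position per step and (as a caricature) complies only when an
# affirmative token is already visible in the partial sequence.

MASK = "<mask>"

def toy_denoiser(sequence):
    """Stand-in for one denoising step: unmask the first masked position.

    If the visible context already contains the affirmative token "Sure",
    this stub continues the completion; otherwise it refuses.
    """
    affirmative = "Sure" in sequence
    filled = list(sequence)
    for i, tok in enumerate(filled):
        if tok == MASK:
            filled[i] = "harmful-step" if affirmative else "I-cannot-help"
            break
    return filled

def denoise(sequence, primed_token=None, primed_pos=None):
    """Run denoising to completion, optionally contaminating one position
    with an attacker-chosen token before the remaining steps run."""
    seq = list(sequence)
    if primed_token is not None:
        seq[primed_pos] = primed_token  # contaminated intermediate state
    for _ in range(seq.count(MASK)):
        seq = toy_denoiser(seq)
    return seq

clean = denoise([MASK] * 4)                                   # aligned behavior
primed = denoise([MASK] * 4, primed_token="Sure", primed_pos=0)  # primed attack
```

The point of the toy: `clean` refuses at every step, while `primed` is steered into compliance by the single planted token, because the later denoising steps condition bidirectionally on it. The paper's proposed alignment trains the model to recover safely even from such contaminated intermediate states.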
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 10605