FastDiSS: Few-Step Sampling Matches Many-Step Diffusion Language Models on Sequence-to-Sequence Generation

ACL ARR 2026 January Submission 2641 Authors

03 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: diffusion language model, text-to-text generation, machine translation
Abstract: Self-conditioning has been central to the success of continuous diffusion language models, as it allows models to correct their previous errors. Yet its effectiveness degrades precisely in the regime where diffusion is most attractive for deployment: few-step sampling for fast inference. In this study, we show that when a model is given only a few denoising steps, inaccurate self-conditioning induces a substantial approximation gap; these errors compound across denoising steps and ultimately dominate sample quality. To address this, we propose a novel training framework that handles these errors during learning by perturbing the self-conditioning signal to match inference-time noise, improving robustness to errors in the prior estimate. In addition, we introduce a token-level noise-awareness mechanism that prevents training from saturating, thereby improving optimization. Extensive experiments across conditional generation benchmarks demonstrate that our framework surpasses standard continuous diffusion models while providing up to 400x faster inference, and remains competitive with other one-step diffusion frameworks.
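To make the training idea in the abstract concrete, below is a minimal sketch of one training step in which the self-conditioning estimate is deliberately perturbed before being fed back to the denoiser, so that training sees the kind of inaccurate prior estimates that arise in few-step inference. This is an illustrative assumption, not the authors' released code: `model`, `q_sample`, `sc_noise_std`, and `drop_sc_prob` are hypothetical names for a generic continuous diffusion LM interface.

```python
import torch

def training_step(model, x0_emb, cond, timesteps, sc_noise_std=0.1, drop_sc_prob=0.5):
    """One training step with a perturbed self-conditioning signal (sketch).

    x0_emb:    clean token embeddings, shape (batch, seq_len, dim)
    cond:      encoder states of the source sequence (seq-to-seq conditioning)
    timesteps: sampled diffusion timesteps, shape (batch,)
    """
    # Forward-diffuse the clean embeddings to the sampled noise level.
    noise = torch.randn_like(x0_emb)
    x_t = q_sample(x0_emb, timesteps, noise)  # assumed helper: standard q(x_t | x_0)

    # First pass without self-conditioning to obtain a prior estimate of x_0,
    # as in standard self-conditioning training.
    with torch.no_grad():
        x0_hat = model(x_t, timesteps, cond, self_cond=None)

    # Perturb the prior estimate so training matches the inaccurate
    # self-conditioning inputs encountered during few-step sampling.
    x0_hat = x0_hat + sc_noise_std * torch.randn_like(x0_hat)

    # Randomly drop the signal so the model also learns the unconditioned first step.
    if torch.rand(()) < drop_sc_prob:
        x0_hat = None

    # Second pass with the (perturbed) self-conditioning input; regress to x_0.
    x0_pred = model(x_t, timesteps, cond, self_cond=x0_hat)
    return torch.mean((x0_pred - x0_emb) ** 2)
```

The perturbation scale and drop probability here are placeholders; the paper's actual mechanism for matching inference noise and its token-level noise-awareness component are described in the submission itself.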
Paper Type: Long
Research Area: Natural Language Generation
Research Area Keywords: efficient models, text-to-text generation, robustness
Contribution Types: Model analysis & interpretability, Reproduction study, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 2641