Segment-Level Diffusion: A Framework for Controllable Long-Form Generation with Diffusion Language Models
Abstract: Diffusion models have shown promise in text generation, but often struggle with generating long, coherent, and contextually accurate text.
Token-level diffusion does not model word-order dependencies explicitly and operates on short, fixed output windows, while passage-level diffusion struggles to learn robust representations for long-form text. To address these challenges, we propose Segment-Level Diffusion (SLD), a framework that enhances diffusion-based text generation through text segmentation, robust representation training with adversarial and contrastive learning, and improved latent-space guidance. By segmenting long-form outputs into multiple latent representations and decoding them with an autoregressive decoder, SLD simplifies diffusion predictions and improves scalability. Experiments on four datasets demonstrate that, compared with other diffusion and autoregressive baselines, SLD achieves competitive or superior fluency, coherence, and contextual compatibility in both automatic and human evaluations.
Paper Type: Long
Research Area: Generation
Research Area Keywords: Diffusion Model, Controlled Generation, Dialogue, Long-form
Languages Studied: English
Submission Number: 3407