FourierSampler: Unlocking Non-Autoregressive Potential in Diffusion Language Models via Frequency-Guided Generation

ACL ARR 2026 January Submission5164 Authors

05 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: Diffusion Language Model, Fourier Transform, Frequency Analysis
Abstract: Diffusion Large Language Models (dLLMs) have recently attracted wide attention in NLP for their arbitrary-order decoding, their potential to capture more complex semantics, and their ability to generate from structure to detail. Nevertheless, existing work finds that dLLMs exhibit positional bias or fail to fully unlock the potential of non-autoregressive generation, which has spurred research on dLLM decoding strategies. Current strategies rely primarily on external signal intervention to steer decoding, leaving the dLLM's internal characteristics largely unexplored. Inspired by signal processing theory and its applications in NLP, we introduce frequency-domain analysis into dLLMs and propose FourierSampler, which applies a frequency-domain sliding window to hidden states so that the model first decodes structural content dominated by low-frequency signals and then decodes detailed content dominated by high-frequency signals. In validation experiments on LLaDA and SDAR, FourierSampler consistently improves performance on code and math tasks, surpassing existing methods as well as autoregressive models of the same size.
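The abstract describes scoring positions by the frequency content of their hidden states so that low-frequency-dominated (structural) tokens are decoded first. The exact scoring rule and window design are not specified in the abstract, so the sketch below is a hypothetical illustration, not the paper's method: it takes a per-position FFT over the feature dimension, measures the fraction of spectral energy below a cutoff, and orders masked positions by that ratio. The function names (`low_frequency_ratio`, `fourier_order`) and the `cutoff` parameter are assumptions; the actual FourierSampler additionally uses a sliding window over the hidden states.

```python
import numpy as np

def low_frequency_ratio(hidden, cutoff=0.25):
    """Fraction of spectral energy below `cutoff` for each position.

    hidden: (seq_len, dim) array of dLLM hidden states.
    Returns a (seq_len,) array in [0, 1]; higher means more
    low-frequency-dominated (hypothetical scoring rule).
    """
    # Magnitude spectrum of each position's hidden vector over the feature axis.
    spectrum = np.abs(np.fft.rfft(hidden, axis=-1))
    n_bins = spectrum.shape[-1]
    k = max(1, int(cutoff * n_bins))  # number of bins counted as "low frequency"
    low = spectrum[:, :k].sum(axis=-1)
    total = spectrum.sum(axis=-1) + 1e-9  # avoid division by zero
    return low / total

def fourier_order(hidden, masked_positions):
    """Order masked positions for decoding: most low-frequency-dominated first,
    approximating a structure-before-detail schedule."""
    scores = low_frequency_ratio(hidden)
    return sorted(masked_positions, key=lambda i: -scores[i])
```

In an actual sampler loop, `fourier_order` would be recomputed after each denoising step as hidden states change, and only a prefix of the returned order would be unmasked per step.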
Paper Type: Long
Research Area: Natural Language Generation
Research Area Keywords: inference methods, model architectures
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 5164