Dynamic Infilling Anchors for Format-Constrained Generation in Diffusion LLMs

13 Sept 2025 (modified: 05 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Dynamic Infilling Anchors, Training-Free Method, Format-Constrained Text Generation, Diffusion Large Language Models, Controllable Generation
Abstract: Diffusion large language models (dLLMs) have recently emerged as a compelling alternative to autoregressive LLMs, offering bidirectional attention and parallel sequence generation. These properties allow dLLMs to exploit global contextual information and naturally support the integration of non-sequential constraints, making them particularly suitable for format-constrained tasks such as generating parseable JSON or reasoning–answer templates. A straightforward approach is to enforce such constraints with fixed anchors, but this often results in rigid generation spans, leading to truncated reasoning or redundant content. To overcome this limitation, we propose a training-free method, Dynamic Infilling Anchors (DIA). DIA dynamically adjusts generation length by estimating appropriate end-anchor positions before content generation and then iteratively infilling the spans between anchors. This flexible mechanism ensures structural correctness and semantic coherence while avoiding the inefficiencies of fixed-span methods. Experiments on reasoning-oriented benchmarks demonstrate that DIA substantially improves both format compliance and answer accuracy, achieving significant gains on GSM8K and MATH under zero-shot settings. These results highlight the promise of dLLMs for reliable, structure-aware generation and establish DIA as a practical pathway toward robust format-constrained text generation.
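
To make the two-stage procedure described in the abstract concrete, here is a minimal sketch of the idea: first place each end anchor by scoring candidate span lengths, then iteratively infill the masked spans between anchors. The paper does not specify an interface, so everything here is an assumption: `Span`, `anchor_logprob`, `denoise_step`, and the length-scoring heuristic are hypothetical names standing in for whatever a masked-diffusion LLM actually exposes.

```python
# Hypothetical sketch of the DIA idea as described in the abstract.
# `anchor_logprob` and `denoise_step` are placeholder names, not from the
# paper; any masked-diffusion LLM exposing per-position token scores would do.

from dataclasses import dataclass

MASK = "<mask>"

@dataclass
class Span:
    start_anchor: list[str]  # fixed tokens opening the span, e.g. '"answer": "'
    end_anchor: list[str]    # fixed tokens closing the span, e.g. '"'
    max_len: int             # upper bound on the span length to search over

def estimate_span_length(model, prefix, span):
    """Pick the end-anchor position before generating any content.

    Assumed heuristic: for each candidate length L, place the end anchor
    L masks after the start anchor and keep the L under which the model
    assigns the highest score to the end-anchor tokens at that position.
    """
    best_len, best_score = 1, float("-inf")
    for L in range(1, span.max_len + 1):
        seq = prefix + span.start_anchor + [MASK] * L + span.end_anchor
        score = model.anchor_logprob(seq, anchor=span.end_anchor)  # assumed API
        if score > best_score:
            best_len, best_score = L, score
    return best_len

def infill(model, seq, steps):
    """Iteratively unmask tokens between the (now fixed) anchors."""
    for _ in range(steps):
        seq = model.denoise_step(seq)  # assumed API: fills some MASK positions
        if MASK not in seq:
            break
    return seq

def generate_with_dia(model, prompt_tokens, spans, steps=32):
    seq = list(prompt_tokens)
    for span in spans:
        # Stage 1: estimate length, then commit both anchors to the sequence.
        L = estimate_span_length(model, seq, span)
        seq += span.start_anchor + [MASK] * L + span.end_anchor
    # Stage 2: infill all masked spans; anchors stay fixed throughout.
    return infill(model, seq, steps)
```

Because the anchors are committed before any content tokens are generated, the format is guaranteed by construction; the length search is what distinguishes this from a fixed-span baseline, which would hard-code `L` per span.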
Primary Area: foundation or frontier models, including LLMs
Submission Number: 4747