Keywords: LLM agents, tool calling, diffusion language models, multi-agent systems
Abstract: The pursuit of real-time agentic interaction has driven interest in Diffusion-based Large Language Models (dLLMs) as alternatives to auto-regressive backbones, promising to break the sequential latency bottleneck. $\textbf{However, do such efficiency gains translate into effective agentic behavior?}$ In this work, we present a comprehensive evaluation of dLLMs (e.g., LLaDA, Dream) across two distinct agentic paradigms: Embodied Agents (requiring long-horizon planning) and Tool-Calling Agents (requiring precise formatting).
Contrary to the efficiency hype, our results on AgentBoard and BFCL reveal a "$\textbf{bitter lesson}$": current dLLMs fail to serve as reliable agentic backbones, frequently leading to systematic failure. $\textbf{(1) In Embodied settings}$, dLLMs suffer from repeated action attempts, failing to branch under temporal feedback. $\textbf{(2) In Tool-Calling settings}$, dLLMs fail to maintain symbolic precision (e.g., strict JSON schemas) under diffusion noise. To assess the potential of dLLMs in agentic workflows, we introduce $\textbf{DiffuAgent}$, a multi-agent evaluation framework that integrates dLLMs as plug-and-play cognitive cores. Our analysis shows that dLLMs are effective in non-causal roles (e.g., memory summarization and tool selection) but require the incorporation of causal, precise, and logically grounded reasoning mechanisms into the denoising process to be viable for agentic tasks.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: AI/LLM Agents, Autonomous agents, tool use, function calling, agent memory, Resources and Evaluation
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 10315