Keywords: dialogue theory, LLM agents, multi-agent systems, planning in agents, agent communication, agent coordination and negotiation, agent memory, safety and alignment for agents, grounded agents
Abstract: Large Language Model agents can seemingly plan and act, yet their language use is often treated as a thin interface for reporting. We argue that this framing is the root cause of predictable coordination failures in human-facing and multi-agent settings, including ungrounded assumptions, silent goal misalignment, brittle protocol adherence, and conversational amnesia. Drawing on classical dialogue systems research on joint action, common ground, grounding, repair, and incremental processing, we re-frame dialogue as part of the planning loop itself, rather than its output. In this position paper, we do not propose a new benchmark or training method; instead, we offer a novel perspective and actionable requirements for designing and evaluating agents. We distill this re-framing into concrete implications for agentic architecture and evaluation, including explicit representations of shared commitments, planned clarification as an action, and process metrics that measure mutual understanding rather than task completion alone. Lastly, we discuss how dialogue-centered requirements can inform standards and governance for the safe deployment of agentic systems.
Paper Type: Short
Research Area: AI/LLM Agents
Research Area Keywords: LLM agents, multi-agent systems, planning in agents, agent communication, agent coordination and negotiation, environment interaction, agent memory, safety and alignment for agents, grounded agents
Contribution Types: Position papers, Surveys
Languages Studied: English
Submission Number: 7660