Argus: Disambiguating User Queries for Tool-Calling Agents via Uncertainty Quantification

ACL ARR 2025 May Submission 7153 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Agents that bridge language understanding and tool execution are increasingly tasked with carrying out user intent in open-ended environments. However, ambiguous or infeasible user instructions frequently lead to incorrect tool invocations, system failures, and a degraded user experience. Existing clarification approaches operate in unstructured token spaces and rely on general-purpose uncertainty estimation, resulting in over-clarification and inefficient question selection. We propose Argus, an information-theoretic approach that leverages structured tool argument domains to resolve ambiguous tool calls through principled clarification. By operating directly on tool argument spaces rather than arbitrary text, Argus combines exploration-exploitation optimization with regret minimization to strategically select clarifying questions that maximize information gain while minimizing user interaction burden. To evaluate clarification strategies in realistic scenarios, we develop ClarifyBench, which uniquely combines dynamic user simulation with multi-turn conversational progression across five domains, addressing critical gaps in existing static evaluation approaches. Experiments demonstrate that Argus outperforms prior clarification strategies by 25% in task success while reducing unnecessary clarification by up to 40%, significantly enhancing user satisfaction through reduced interaction burden.
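The core intuition of operating on structured tool argument spaces can be illustrated with a minimal sketch: given several plausible tool-call interpretations of a user query, ask about the argument slot whose candidate values carry the most entropy, since resolving it yields the largest information gain. This is an assumption-laden simplification (it omits the paper's exploration-exploitation and regret-minimization components); the function names and data layout below are illustrative, not the authors' implementation.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution over values."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_clarifying_slot(candidate_calls):
    """Pick the argument slot to clarify: the one whose values are most
    uncertain across the candidate tool-call interpretations.

    candidate_calls: list of dicts, each mapping argument name -> value,
    one dict per plausible interpretation of the user query.
    """
    slots = candidate_calls[0].keys()
    gains = {s: entropy([call[s] for call in candidate_calls]) for s in slots}
    return max(gains, key=gains.get)

# Example: "What's the weather in Paris?" is ambiguous only in the unit,
# so the agent should ask about "unit", not "city".
candidates = [
    {"city": "Paris", "unit": "celsius"},
    {"city": "Paris", "unit": "fahrenheit"},
]
print(select_clarifying_slot(candidates))  # -> "unit"
```

Because the slot domains are finite and structured, the entropy computation is exact and cheap, in contrast to uncertainty estimation over free-form text.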
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLM/AI agents;
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 7153