Keywords: ambiguity detection, valency analysis, embodied agents, task-oriented
Abstract: Natural language instructions in human-robot interaction often contain subtle ambiguities that hinder reliable interpretation. These ambiguities arise when a single instruction can be interpreted in multiple ways, assigning conflicting semantic roles to objects, tools, or participants, potentially leading to execution failures. To address this, we propose Semantic Valency Conflict (SVC), a cognitively inspired, logit-free method for detecting ambiguity in robot-directed instructions. SVC identifies divergences in role assignments across alternative interpretations of a predicate, using large language models (LLMs) to infer context-sensitive semantic frames. Our method is model-agnostic and compatible with both open- and closed-source LLMs. SVC produces clear, structured outputs that highlight which parts of an instruction are ambiguous, indicating which predicate and which of its associated arguments give rise to multiple or conflicting interpretations. We evaluate SVC on two datasets, AmbiK and Introspective Planning, and demonstrate strong, consistent performance in detecting subtle ambiguities in natural language instructions given to robots across safety-critical, unambiguous, and preference-based scenarios.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: ambiguity detection, valency analysis, embodied agents, task-oriented
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 10152