Keywords: actionability, perception interpretation, cognitive-robotic agents, cognitive agents, language-endowed intelligent agents
Abstract: Semantically interpreting and grounding multimodal stimuli is a core requirement of cognitive robotic systems, but it is challenging because inputs can be fragmented, ambiguous, underspecified, ill-formed, and conveyed through noisy channels. This means that agents, like people, need to be able to determine when their understanding is actionable—i.e., sufficient to support reasoning about action—even if it is imperfect or incomplete. When an interpretation is not actionable, the agent has to decide what to do, such as waiting to see what happens or seeking clarification through dialog. This paper demonstrates that it is possible to model actionability assessment, as well as recovery from non-actionable interpretations, without drowning in real-world complexity by modeling agents as collaborative social actors. Like human apprentices, such agents can take best guesses in benign contexts, ask clarification questions, and generally rely on their human partners to share the responsibility for achieving a successful collaboration. The paper also briefly comments on another use of the term actionability, which involves the agent’s ability to actually carry out an action that it understands it should do. The models reported in the paper are implemented in Language-Endowed Intelligent Agents configured within the HARMONIC neurosymbolic architecture.
Paper Track: Technical paper
Submission Number: 17