Explainable Reasoning and Learning in Ad Hoc Teamwork
Keywords: Knowledge representation, Non-monotonic logical reasoning, Ecological rationality, Large language model, Explanation generation, Ad hoc teamwork
TL;DR: A hybrid architecture for ad hoc teamwork that integrates non-monotonic logical reasoning with prior commonsense knowledge; rapidly learned models of others; and anticipated abstract future tasks to guide decision-making and provide explanations.
Abstract: An assistive AI agent often has to collaborate with previously unseen agents and humans. State-of-the-art methods for such ad hoc teamwork use a large labeled dataset of prior observations to model the behavior of other agents and to determine the ad hoc agent's behavior. These approaches are resource-hungry and do not support rapid incremental revisions or transparency, yet the necessary resources (e.g., training examples, computation) are often not readily available in practical domains. Our architecture for ad hoc teamwork embeds the principles of refinement, ecological rationality, interactive learning, and explainable agency, leveraging the complementary strengths of knowledge-based and data-driven methods. For any given goal, the ad hoc agent determines its actions through non-monotonic logical reasoning with: (a) prior domain-specific commonsense knowledge; (b) models learned rapidly to predict the behavior of other agents; and (c) anticipated abstract future tasks based on generic knowledge of similar situations. Further, the agent generates relational descriptions as explanations of its decisions and those of other agents. We evaluate the capabilities of our architecture in VirtualHome, a realistic 3D simulation environment.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 5