Policy graphs in action: explaining single- and multi-agent behaviour using predicates

Published: 27 Oct 2023, Last Modified: 23 Nov 2023, NeurIPS XAIA 2023
TL;DR: A demonstration of a library that implements explainability methods using policy graphs, with two use cases: Cartpole and Overcooked-AI
Abstract: This demo shows that policy graphs (PGs) provide reliable explanations of the behaviour of agents trained in two distinct environments. Additionally, this work shows the ability to generate surrogate agents from PGs that exhibit an accurate behavioural resemblance to the original agents, and that this allows us to validate the explanations given by the system. This facilitates transparent integration of opaque agents into socio-technical systems, ensuring explainability of their actions and decisions, enabling trust in hybrid human-AI environments, and ensuring cooperative efficacy. We present demonstrations based on two environments, along with a work-in-progress library that will allow integration with a broader range of environments and types of agent policies.
Submission Track: Demo Track
Application Domain: None of the above / Not applicable
Clarify Domain: Behaviour of Situated Agents
Survey Question 1: Policy graphs are graph representations of the policy of a trained opaque agent. In the work presented here, we have contextualized the use of policy graphs for achieving explainability of agents with opaque policies, and we have discussed how to generate such policy graphs, how to use them to produce explanations, and how to validate these explanations, e.g. by creating PG agents that mimic the behaviour of the original agent. We present a demonstration that we believe fits the workshop and that can be of interest to anyone interested in the broad concept of explainability, as trained agents are becoming increasingly complex.
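As a rough illustration of the policy-graph idea described above, the sketch below is a minimal, self-contained approximation (not the paper's library API; all names, data structures, and the toy predicates are assumptions for illustration): observed states are discretised into predicate-based nodes, transitions from recorded trajectories are counted, and a surrogate "PG agent" acts by picking the most frequent action recorded for its current node.

```python
# Minimal sketch of building a policy graph from trajectories and deriving a
# surrogate PG agent. This is illustrative only, not the authors' library API.
from collections import Counter, defaultdict

def build_policy_graph(trajectories, to_predicates):
    """trajectories: iterable of (state, action, next_state) triples."""
    graph = defaultdict(Counter)          # node -> Counter of (action, next_node) transitions
    action_counts = defaultdict(Counter)  # node -> Counter of actions taken from that node
    for state, action, next_state in trajectories:
        node, next_node = to_predicates(state), to_predicates(next_state)
        graph[node][(action, next_node)] += 1
        action_counts[node][action] += 1
    return graph, action_counts

def pg_agent_action(action_counts, to_predicates, state, default_action=0):
    """Surrogate agent: choose the most frequent recorded action for this node."""
    counts = action_counts.get(to_predicates(state))
    return counts.most_common(1)[0][0] if counts else default_action

# Toy usage with hypothetical two-feature states (pole angle, cart velocity).
def to_predicates(state):
    pole_angle, cart_velocity = state
    return ("pole_right" if pole_angle > 0 else "pole_left",
            "moving_right" if cart_velocity > 0 else "moving_left")

trajectories = [((0.1, 0.5), 1, (0.05, 0.4)), ((-0.2, -0.1), 0, (-0.1, 0.0))]
graph, action_counts = build_policy_graph(trajectories, to_predicates)
print(pg_agent_action(action_counts, to_predicates, (0.15, 0.3)))  # -> 1
```

Comparing the surrogate's chosen actions with those of the original agent over held-out states is one simple way such explanations could be validated, in the spirit of the PG agents mentioned above.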
Survey Question 2: The main motivation was to be able to explain the behaviour of opaque agents in general, and therefore make (physical, virtual) systems using potentially complex policies (such as those trained via RL) more transparent, understandable and trustworthy for humans. We also believe that being able to produce explanations of complex policies can allow us to generate surrogates of such agents that are explainable by default and therefore easier to change, maintain, control and align with values.
Survey Question 3: We use policy graphs, with predicates defined ad hoc for each environment.
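For example, ad hoc predicates for Cartpole might discretise the four-dimensional observation (cart position, cart velocity, pole angle, pole angular velocity) into symbolic values. The thresholds below are illustrative assumptions, not the values used in the demo:

```python
# Hypothetical ad-hoc predicates for Cartpole; thresholds are arbitrary examples.
def cartpole_predicates(obs):
    cart_pos, cart_vel, pole_angle, pole_vel = obs
    return (
        "cart_left" if cart_pos < -0.5 else "cart_right" if cart_pos > 0.5 else "cart_centre",
        "cart_moving_left" if cart_vel < 0 else "cart_moving_right",
        "pole_falling_left" if pole_angle < -0.05 else
        "pole_falling_right" if pole_angle > 0.05 else "pole_upright",
        "pole_rotating_left" if pole_vel < 0 else "pole_rotating_right",
    )
```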
Submission Number: 84