Deep Reinforcement Learning Explanation via Model Transforms

12 Oct 2021 (modified: 05 May 2023) · Deep RL Workshop, NeurIPS 2021
Keywords: deep reinforcement learning, explainability, model abstractions
TL;DR: We use formal MDP abstractions and transforms, previously used for expediting planning, to automatically explain discrepancies between the behavior of a DRL agent and the behavior that is anticipated by an observer.
Abstract: Understanding the emergent behaviors of deep reinforcement learning agents can be difficult because such agents are often trained with highly complex and expressive models. In recent years, most approaches developed for explaining agent behaviors rely on domain knowledge or on an analysis of the agent’s learned policy. For some domains, relevant knowledge may not be available or may be insufficient for producing meaningful explanations. We suggest using formal model abstractions and transforms, previously used mainly for expediting the search for optimal policies, to automatically explain discrepancies that may arise between the behavior of an agent and the behavior anticipated by an observer. We formally define this problem of Reinforcement Learning Policy Explanation (RLPE), propose a class of transforms that can be used for explaining emergent behaviors, and present methods for searching efficiently for an explanation. We demonstrate the approach on standard benchmarks.
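The formal RLPE definition and the transform class appear in the full paper; as a rough, hypothetical illustration of the underlying idea only (not the authors' algorithm), the Python sketch below searches over candidate model transforms for one under which an observer's anticipated behavior becomes optimal, so that the matching transform points at the aspect of the model the observer is overlooking. The toy MDP, the transform list, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of explaining a behavior discrepancy via model transforms.
# Not the paper's algorithm; a toy illustration with tabular value iteration.
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P: (S, A, S) transition probabilities, R: (S, A) rewards.
    Returns the greedy (deterministic) optimal policy."""
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)        # (S, A) action values
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)
        V = V_new

def explain_discrepancy(P, R, agent_policy, anticipated_policy, transforms):
    """Search for a transform of the true model under which the observer's
    anticipated behavior is optimal at the states where the observer is
    surprised; that transform is returned as a candidate explanation."""
    surprising = np.flatnonzero(agent_policy != anticipated_policy)
    for name, transform in transforms:
        P_t, R_t = transform(P, R)
        pi_t = value_iteration(P_t, R_t)
        if np.array_equal(pi_t[surprising], anticipated_policy[surprising]):
            return name, surprising
    return None, surprising

# Toy 3-state, 2-action MDP: action 0 stays, action 1 moves to the next state
# at a cost of -10; only staying in state 2 yields reward +1.
P = np.zeros((3, 2, 3))
P[:, 0, :] = np.eye(3)                       # action 0: stay
P[:, 1, :] = np.roll(np.eye(3), 1, axis=1)   # action 1: cyclic move
R = np.array([[0.0, -10.0], [0.0, -10.0], [1.0, -10.0]])

agent_policy = value_iteration(P, R)                       # stays everywhere
anticipated_policy = value_iteration(P, np.maximum(R, 0))  # observer ignores costs

transforms = [("observer ignores action costs",
               lambda P, R: (P, np.maximum(R, 0)))]
name, states = explain_discrepancy(P, R, agent_policy, anticipated_policy, transforms)
print("Discrepancy at states", states, "explained by transform:", name)
```

In this toy example the agent never moves because of the large action cost, while an observer whose mental model omits that cost anticipates movement toward the rewarding state; the cost-removal transform reconciles the two policies and thus names the overlooked model feature.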