Sample-Based Rule Extraction for Explainable Reinforcement Learning

Published: 01 Jan 2022, Last Modified: 09 Oct 2023. LOD (1) 2022
Abstract: In this paper we propose a novel, phenomenological approach to explainable Reinforcement Learning (RL). While the ever-increasing performance of RL agents surpasses human capabilities on many problems, it falls short concerning explainability, which may be of minor importance for toy problems but is certainly a major obstacle to the application of RL in industrial and safety-critical processes. The literature contains various approaches to increasing the explainability of deep neural networks. However, to our knowledge there is no simple, agent-agnostic method to extract human-readable rules from trained RL agents. Our approach is based on the idea of observing the agent and its environment during evaluation episodes and inducing a decision tree from the collected samples, thereby obtaining an explainable mapping from the environment’s state to the agent’s corresponding action. We tested our idea on classical control problems provided by OpenAI Gym, using both handcrafted rule-based policies as a benchmark and trained deep RL agents, together with two different algorithms for decision tree induction. The extracted rules demonstrate how this new approach might be a valuable step towards the goal of explainable RL.
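The observe-then-induce idea from the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the "trained agent" below is a hypothetical handcrafted policy on a CartPole-like state vector, the sampling loop draws random states instead of running Gym evaluation episodes, and scikit-learn's `DecisionTreeClassifier` stands in for whichever of the two induction algorithms the paper used.

```python
# Sketch of sample-based rule extraction: observe (state, action) pairs from
# an agent, then induce a shallow decision tree as a human-readable surrogate.
# All names here (agent_policy, the feature layout) are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def agent_policy(state):
    # Hypothetical stand-in for a trained RL agent on a CartPole-like task:
    # push toward the side the pole is falling to (angle + angular velocity).
    angle, ang_vel = state[2], state[3]
    return 1 if angle + 0.5 * ang_vel > 0 else 0

# 1) Observe the agent: collect (state, action) samples. In practice these
#    would come from evaluation episodes in an OpenAI Gym environment.
states = rng.uniform(-1.0, 1.0, size=(2000, 4))  # [pos, vel, angle, ang_vel]
actions = np.array([agent_policy(s) for s in states])

# 2) Induce a shallow decision tree from the samples; small depth keeps the
#    extracted rules readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(states, actions)

# 3) Fidelity: how often the surrogate tree agrees with the observed agent.
fidelity = (tree.predict(states) == actions).mean()
rules = export_text(tree, feature_names=["pos", "vel", "angle", "ang_vel"])
print(rules)
print(f"fidelity on collected samples: {fidelity:.3f}")
```

The printed `export_text` output is exactly the kind of explainable state-to-action mapping the abstract describes: a nested list of threshold rules over named state features. The fidelity score quantifies how faithfully the shallow tree mimics the agent it was distilled from.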