Backward explanations via redefinition of predicates

Published: 01 Aug 2024, Last Modified: 09 Oct 2024, EWRL17, CC BY 4.0
Keywords: Explainable Reinforcement Learning, Sequence Explanation, Importance Score
TL;DR: This paper describes a novel approach to providing History eXplanations based on Predicates (HXP) for long state-action sequences without having to approximate action importance scores.
Abstract: History eXplanation based on Predicates (HXP) studies the behavior of a Reinforcement Learning (RL) agent over a sequence of the agent's interactions with the environment (a history), through the prism of an arbitrary predicate [20]. To this end, an action importance score is computed for each action in the history. The explanation consists of displaying the most important actions to the user. As computing an action's importance is #W[1]-hard, the scores must be approximated for long histories, at the expense of their quality. We therefore propose a new HXP method, called Backward-HXP, to provide explanations for these histories without having to approximate scores. Experiments show the ability of B-HXP to summarise long histories.
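The explanation step described in the abstract, displaying the most important actions of a history, can be sketched minimally as a top-k selection over per-action importance scores. The history and scores below are illustrative placeholders, not the paper's actual computation (which is #W[1]-hard in general):

```python
# Hypothetical sketch: rank the actions of a history by a precomputed
# importance score and keep the k highest-scoring ones, as an HXP-style
# explanation would display them. All values here are made up.

def top_k_actions(history, scores, k=3):
    """Return the k (step, action) pairs with the highest importance scores."""
    ranked = sorted(
        zip(range(len(history)), history, scores),
        key=lambda t: t[2],  # sort by importance score
        reverse=True,
    )
    return [(step, action) for step, action, _ in ranked[:k]]

# Toy action history and fabricated importance scores
history = ["left", "up", "up", "right", "pickup", "down"]
scores = [0.10, 0.05, 0.80, 0.20, 0.95, 0.15]
print(top_k_actions(history, scores, k=2))  # → [(4, 'pickup'), (2, 'up')]
```

The motivation for Backward-HXP is precisely that, for long histories, obtaining reliable values for `scores` exactly is intractable, so this selection would otherwise have to run on approximated scores.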
Submission Number: 51