Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep RL

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • TL;DR: We propose a counterfactual methodology for evaluating the hypotheses that saliency maps suggest about deep RL agent behavior.
  • Abstract: Saliency maps are often used to suggest explanations of the behavior of deep reinforcement learning (RL) agents. However, the explanations derived from saliency maps are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and show that explanations suggested by saliency maps are often not supported by experiments. Our experiments suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.
  • Keywords: explainability, saliency maps, representations, deep reinforcement learning
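
To make the abstract's methodology concrete, below is a minimal sketch of one counterfactual test: compute a saliency map, form the hypothesis that the agent relies on the most salient region, intervene on that region, and check whether the agent's behavior changes as the hypothesis predicts. Everything here (the toy `policy`, the occlusion-style saliency, the pixel-level intervention, the 32x32 observation) is a hypothetical stand-in for illustration, not the paper's implementation, whose experiments intervene on semantic game state rather than on raw pixels.

```python
# Hedged sketch of counterfactual testing of a saliency-derived hypothesis.
# All components here are toy stand-ins, not the paper's actual code.
import numpy as np

PATCH = 8  # side length of the occlusion patch

def policy(obs):
    """Hypothetical stand-in for a trained agent: linear logits over 3 actions."""
    rng = np.random.default_rng(0)          # fixed seed -> deterministic weights
    W = rng.normal(size=(obs.size, 3))
    return obs.reshape(-1) @ W

def occlude(obs, i, j):
    """Replace one patch of the observation with its mean pixel value."""
    out = obs.copy()
    out[i:i + PATCH, j:j + PATCH] = obs.mean()
    return out

def perturbation_saliency(obs):
    """Occlusion-style saliency: how much each patch moves the policy logits."""
    base = policy(obs)
    sal = np.zeros_like(obs)
    for i in range(0, obs.shape[0], PATCH):
        for j in range(0, obs.shape[1], PATCH):
            shift = np.abs(policy(occlude(obs, i, j)) - base).sum()
            sal[i:i + PATCH, j:j + PATCH] = shift
    return sal

def counterfactual_test(obs, intervention):
    """Test the saliency-derived hypothesis: if the agent truly relies on the
    highlighted region, intervening on it should change the greedy action."""
    before, after = policy(obs), policy(intervention(obs))
    return {
        "action_changed": bool(np.argmax(before) != np.argmax(after)),
        "max_logit_shift": float(np.abs(after - before).max()),
    }

obs = np.random.default_rng(1).random((32, 32))
sal = perturbation_saliency(obs)

# Hypothesis from the saliency map: "the agent relies on the most salient patch."
i, j = np.unravel_index(np.argmax(sal), sal.shape)
print(counterfactual_test(obs, lambda o: occlude(o, i, j)))
```

If `action_changed` comes back False even though the patch was highly salient, the saliency-derived hypothesis is not supported by the intervention, which is the kind of mismatch between saliency and behavior that motivates treating saliency maps as exploratory rather than explanatory.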