Explanation Augmented Feedback in Human-in-the-Loop Reinforcement Learning

Anonymous

15 Oct 2020 (modified: 22 Oct 2023) · HAMLETS @ NeurIPS 2020
Keywords: human-in-the-loop reinforcement learning, interactive reinforcement learning, explanations, saliency map
Abstract: Human-in-the-loop Reinforcement Learning (HRL) aims to integrate human guidance with Reinforcement Learning (RL) algorithms to improve sample efficiency and performance. A common type of human guidance in HRL is binary evaluative "good" or "bad" feedback for queried states and actions. However, this learning scheme suffers from weak supervision and uses human feedback inefficiently. To address this, we present EXPAND (EXPlanation AugmeNted feeDback), which supplements the binary feedback with visual explanations from humans in the form of saliency maps. EXPAND employs a state perturbation approach based on the salient information in the state to augment the binary feedback. We evaluate this approach on five tasks: Pixel-Taxi and four Atari games. We demonstrate the effectiveness of our method using two metrics, environment sample efficiency and human feedback sample efficiency, and show that it significantly outperforms previous methods. We also analyze the results qualitatively by visualizing the agent's attention. Finally, we present an ablation study confirming our hypothesis that augmenting binary feedback with state saliency information improves performance.
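To make the state-perturbation idea concrete, the sketch below shows one plausible way a human-annotated saliency mask could be used to corrupt only the parts of an image observation the human marked as irrelevant. This is an illustrative sketch under assumptions, not the paper's implementation: the function name `perturb_irrelevant_regions`, the additive Gaussian pixel noise, and the `sigma` parameter are all choices made for illustration.

```python
import numpy as np

def perturb_irrelevant_regions(state, saliency_mask, sigma=0.05, rng=None):
    """Corrupt the parts of `state` that the human did NOT mark as salient.

    state:         float array in [0, 1], shape (H, W) or (H, W, C)
    saliency_mask: boolean array of shape (H, W); True marks salient pixels
    sigma:         std. dev. of the additive Gaussian noise (assumed choice)
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=state.shape)
    # Broadcast the 2-D mask over the channel axis if the state has channels.
    mask = saliency_mask if state.ndim == 2 else saliency_mask[..., None]
    # Salient pixels are kept intact; only the regions the human labeled
    # irrelevant are perturbed.
    return np.where(mask, state, np.clip(state + noise, 0.0, 1.0))
```

Perturbed copies generated this way could then be tied back to the original observation, for example by penalizing differences between the agent's predictions on the original and perturbed states, so that the saliency annotation supplies supervision beyond the binary "good"/"bad" label.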
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:2006.14804/code)