Concept-Based Interpretable Reinforcement Learning with Limited to No Human Labels

Published: 07 Jun 2024, Last Modified: 23 Jul 2024 · InterpPol @RLC-2024 · CC BY 4.0
Keywords: Reinforcement Learning, Explainable Reinforcement Learning, Concept Bottleneck Models, Concept-based Explainability, Interpretability, XRL
TL;DR: We reduce the human annotation effort of train-time concept annotation for concept-based interpretable reinforcement learning.
Abstract: Recent advances in reinforcement learning have predominantly leveraged neural network-based policies for decision-making, yet these models often lack interpretability, posing challenges for stakeholder comprehension and trust. Concept bottleneck models offer an interpretable alternative by integrating human-understandable concepts into neural networks. However, a significant limitation of prior work is the assumption that human annotations for these concepts are readily available during training, necessitating continuous real-time input from human annotators. To overcome this limitation, we introduce a novel training scheme that enables RL algorithms to efficiently learn a concept-based policy by querying humans to label only a small set of data, or in the extreme case, without any human labels. Our algorithm, LICORICE, makes three main contributions: interleaving concept learning and RL training, using concept ensembles to actively select informative data points for labeling, and decorrelating the concept data with a simple strategy. We show how LICORICE reduces manual labeling efforts to 500 or fewer concept labels in three environments. Finally, we present an initial study exploring how powerful vision-language models can infer concepts from raw visual inputs without explicit labels, at minimal cost to performance.
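To make the abstract's ingredients concrete, below is a minimal sketch (not the authors' code) of a concept-bottleneck policy together with ensemble-based active selection of states to send for human concept labeling. All class and function names, network sizes, and the variance-based disagreement criterion are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class ConceptBottleneckPolicy(nn.Module):
    """Policy that first predicts human-interpretable concepts, then acts on them."""

    def __init__(self, obs_dim: int, n_concepts: int, n_actions: int):
        super().__init__()
        # Concept predictor: observation -> concept values (the interpretable bottleneck).
        self.concept_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        # Action head: concepts -> action logits, so decisions depend only on concepts.
        self.action_net = nn.Sequential(
            nn.Linear(n_concepts, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        concepts = self.concept_net(obs)
        return concepts, self.action_net(concepts)


def select_for_labeling(
    concept_ensemble: list[nn.Module], obs_pool: torch.Tensor, budget: int
) -> torch.Tensor:
    """Pick the `budget` observations on which an ensemble of concept predictors
    disagrees most (variance across members), a standard active-learning heuristic."""
    with torch.no_grad():
        # Shape: (n_models, n_obs, n_concepts)
        preds = torch.stack([m(obs_pool) for m in concept_ensemble])
        # Average per-concept variance across ensemble members -> one score per observation.
        disagreement = preds.var(dim=0).mean(dim=-1)
    return disagreement.topk(budget).indices  # indices of observations to query humans on


if __name__ == "__main__":
    # Toy usage: three concept predictors vote on which of 1000 observations to label.
    ensemble = [nn.Linear(8, 4) for _ in range(3)]
    pool = torch.randn(1000, 8)
    query_idx = select_for_labeling(ensemble, pool, budget=50)
    print(query_idx.shape)  # torch.Size([50])
```

In this sketch, the labeling budget caps the number of human queries, and the selected observations would be annotated and used to retrain the concept predictor between RL updates; the actual interleaving schedule and decorrelation strategy are described in the paper itself.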
Submission Number: 18