Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning

Published: 07 Jun 2024, Last Modified: 07 Jun 2024. Venue: InterpPol @ RLC 2024. License: CC BY 4.0
Keywords: Reinforcement learning, interpretability, control, navigation, transparency, discrete
TL;DR: We propose a neural policy constrained to express a small number of linear behaviors, and show that it leads to improved interpretability while performing comparably to baselines in several control and navigation tasks.
Abstract: Learning inherently interpretable policies is a central challenge on the path to developing autonomous agents that humans can trust. Linear policies can justify their decisions while interacting in a dynamic environment, but their limited expressivity prevents them from solving hard tasks. Instead, we argue for the use of piecewise-linear policies. We carefully study to what extent they can retain the interpretable properties of linear policies while reaching performance competitive with neural baselines. In particular, we propose the HyperCombinator (HC), a piecewise-linear neural architecture expressing a policy with a controllably small number of sub-policies. Each sub-policy is linear with respect to interpretable features, shedding light on the agent's decision process without requiring an additional explanation model. We evaluate HC policies in control and navigation experiments, visualize the improved interpretability of the agent, and highlight its trade-off with performance. Moreover, we validate that the restricted model class to which the HyperCombinator belongs is compatible with the algorithmic constraints of various reinforcement learning algorithms.
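To make the idea of a piecewise-linear policy with a small number of linear sub-policies concrete, here is a minimal PyTorch-style sketch of one plausible reading. The class name, layer sizes, and the hard argmax gating are illustrative assumptions, not the paper's actual HyperCombinator implementation; in particular, training a hard selection end-to-end would require something like a straight-through or Gumbel-softmax estimator, which is omitted here.

```python
import torch
import torch.nn as nn


class PiecewiseLinearPolicy(nn.Module):
    """Illustrative sketch (not the paper's HC): a gating network selects one
    of K linear sub-policies, and the selected sub-policy maps interpretable
    state features directly to actions."""

    def __init__(self, obs_dim: int, act_dim: int, num_subpolicies: int = 4, hidden: int = 32):
        super().__init__()
        # Gating network: decides which linear behavior is active in a state.
        self.gate = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, num_subpolicies),
        )
        # K linear sub-policies, each a single Linear layer over the raw
        # features, so each active behavior can be read off as one weight
        # matrix plus bias.
        self.subpolicies = nn.ModuleList(
            nn.Linear(obs_dim, act_dim) for _ in range(num_subpolicies)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        logits = self.gate(obs)                              # (batch, K)
        idx = torch.argmax(logits, dim=-1)                   # hard selection -> piecewise-linear
        actions = torch.stack([p(obs) for p in self.subpolicies], dim=1)  # (batch, K, act_dim)
        return actions[torch.arange(obs.shape[0]), idx]


# Usage: inspect which linear behavior fired and read its coefficients,
# which is the kind of transparency the abstract describes.
policy = PiecewiseLinearPolicy(obs_dim=8, act_dim=2)
obs = torch.randn(1, 8)
action = policy(obs)
active = torch.argmax(policy.gate(obs), dim=-1).item()
print("action:", action)
print("active sub-policy:", active)
print("its linear coefficients:", policy.subpolicies[active].weight)
```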
Submission Number: 8