TL;DR: We propose a bisimulation-based metric learning algorithm to learn state-action representations for accurate offline policy evaluation (OPE) and establish the stability properties of the learned representations theoretically and empirically.
Abstract: In reinforcement learning, offline value function learning is the procedure of using an offline dataset to estimate the expected discounted return from each state when taking actions according to a fixed target policy. The stability of this procedure, i.e., whether it converges to its fixed point, critically depends on the representations of the state-action pairs. Poorly learned representations can make value function learning unstable, or even divergent. Therefore, it is important to stabilize value function learning by explicitly shaping the state-action representations. Recently, bisimulation-based algorithms have shown promise in shaping representations for control. However, it remains unclear whether this class of methods can \emph{stabilize} value function learning. In this work, we investigate this question and answer it affirmatively. We introduce a bisimulation-based algorithm called kernel representations for offline policy evaluation (\textsc{krope}). \textsc{krope} uses a kernel to shape state-action representations such that state-action pairs that have similar immediate rewards and lead to similar next state-action pairs under the target policy also have similar representations. We show that \textsc{krope}: 1) learns stable representations and 2) leads to lower value error than baselines. Our analysis provides new theoretical insight into the stability properties of bisimulation-based methods and suggests that practitioners can use these methods to improve the stability and accuracy of offline evaluation of reinforcement learning agents.
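To make the kernel idea concrete, below is a minimal PyTorch-style sketch of one way such a bisimulation-style objective could be written: the kernel between two sampled state-action pairs is regressed toward a reward-similarity term plus the discounted kernel between their next pairs under the target policy. The Encoder and bisimulation_kernel_loss names, the inner-product kernel over embeddings, the Gaussian reward-similarity term, and the target-encoder trick are all illustrative assumptions, not the exact \textsc{krope} objective; see the linked repository for the authors' implementation.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps a state-action pair to a representation vector."""
        def __init__(self, state_dim, action_dim, rep_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
                nn.Linear(128, rep_dim),
            )

        def forward(self, s, a):
            return self.net(torch.cat([s, a], dim=-1))

    def bisimulation_kernel_loss(encoder, target_encoder, s, a, r,
                                 s_next, a_next, gamma=0.99):
        """Illustrative fixed-point-style loss: the kernel between two sampled
        state-action pairs should match a reward-similarity term plus the
        discounted kernel between their next pairs, where the next actions
        a_next are assumed to be drawn from the target policy. Pairs are
        formed by comparing the batch against a shuffled copy of itself."""
        perm = torch.randperm(s.shape[0])

        # Kernel between current representations (inner product; illustrative choice).
        z_i, z_j = encoder(s, a), encoder(s[perm], a[perm])
        k_curr = (z_i * z_j).sum(dim=-1)

        with torch.no_grad():
            # Reward similarity: equals 1 when immediate rewards match, decays with their gap.
            k_reward = torch.exp(-(r - r[perm]) ** 2)
            # Kernel between next state-action pairs under a slowly updated target encoder.
            z_i_next = target_encoder(s_next, a_next)
            z_j_next = target_encoder(s_next[perm], a_next[perm])
            k_next = (z_i_next * z_j_next).sum(dim=-1)
            target = (1.0 - gamma) * k_reward + gamma * k_next

        # Regress the current kernel toward the bootstrapped target kernel.
        return ((k_curr - target) ** 2).mean()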
Lay Summary: Our work is concerned with reinforcement learning (RL) agents. RL agents are data-driven agents that make decisions with long-term consequences. For example, whether a self-driving car reaches a particular location depends on the decisions it has made since the driving session started. As we seek to deploy such decision-making agents in the real world, it is increasingly important that we can determine whether an agent will make good or bad decisions.
One way to assess whether an RL agent will make decisions that lead to negative outcomes is to actually deploy it in the real world. However, doing so is inherently risky since the agent may make unsafe decisions. A safer alternative is to estimate how the RL agent would perform using data collected by other agents that may have already been deployed. This approach (known as offline policy evaluation) asks the counterfactual question: “If this new RL agent had been deployed, how well would it have performed?” Answering counterfactual questions is challenging since it involves reasoning about events that did not occur. Prior methods for counterfactually evaluating an RL agent tend to produce unreliable estimates, which limits their practical usefulness.
To improve previous methods, we explored the idea of using abstractions. At a given moment, RL agents often make decisions based on many factors, but not all of them are important (e.g., clouds do not matter when driving). We wondered if these irrelevant factors hurt the accuracy of offline policy evaluation, and found that they do. To address this limitation, we created an algorithm that learns to abstract away irrelevant factors and focus only on relevant factors. We found that our algorithm made offline policy evaluation more accurate and reliable on simulated robotic tasks. We also provided mathematical explanations for why it works. We believe that our approach brings us closer to deploying more trustworthy RL systems.
Link To Code: https://github.com/Badger-RL/krope
Primary Area: Reinforcement Learning->Batch/Offline
Keywords: reinforcement learning, representation learning, off-policy, offline policy evaluation, bisimulations, stability, value function learning, abstractions
Submission Number: 6986