Outcome-Driven Reinforcement Learning via Variational Inference

21 May 2021, 20:41 (edited 15 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: Reinforcement Learning, Variational Inference, Goal Reaching
  • TL;DR: This paper presents a new probabilistic inference method for goal-directed off-policy reinforcement learning.
  • Abstract: While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://sites.google.com/view/od-ac