On the Expressivity of Markov Reward

21 May 2021, 20:42 (edited 14 Jan 2022) · NeurIPS 2021 Oral · Readers: Everyone
  • Keywords: Reinforcement Learning, Reward Functions, Reward, Reward Hypothesis, Markov Decision Process
  • TL;DR: We study the expressivity of Markov reward functions in finite environments by inspecting what kinds of tasks such functions can express.
  • Abstract: Reward is the driving force for reinforcement-learning agents. This paper is dedicated to understanding the expressivity of reward as a way to capture tasks that we would want an agent to perform. We frame this study around three new abstract notions of “task” that might be desirable: (1) a set of acceptable behaviors, (2) a partial ordering over behaviors, or (3) a partial ordering over trajectories. Our main results prove that while reward can express many of these tasks, there exist instances of each task type that no Markov reward function can capture. We then provide a set of polynomial-time algorithms that construct a Markov reward function that allows an agent to optimize tasks of each of these three types, and correctly determine when no such reward function exists. We conclude with an empirical study that corroborates and illustrates our theoretical findings.
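The inexpressibility result in the abstract can be illustrated with a toy case (a sketch, not the paper's code; all names here are hypothetical): in a one-state MDP with two actions, consider the task "commit to either action, but don't mix them" — i.e., the set of acceptable policies contains both deterministic policies but no stochastic mixture. A Markov reward expresses such a set only if every acceptable policy is optimal and every unacceptable policy is strictly worse, and a brute-force scan over rewards shows no assignment satisfies both conditions:

```python
# Hypothetical illustration: no Markov reward in a one-state, two-action MDP
# expresses the task "always take action a OR always take action b, never mix".

def soap_expressible(r_a, r_b):
    """Return True if reward (r_a, r_b) makes both deterministic policies
    optimal while the 50/50 mixture is strictly suboptimal."""
    # Value of the stationary policy taking action a with probability p.
    value = lambda p: p * r_a + (1 - p) * r_b
    best = max(value(0.0), value(1.0))
    # Both pure policies must be optimal...
    if value(0.0) < best or value(1.0) < best:
        return False
    # ...and the mixture must be strictly worse.
    return value(0.5) < best

# Equal rewards make every mixture optimal too; unequal rewards break one
# of the pure policies. So no candidate reward expresses the task.
candidates = [(x / 10, y / 10) for x in range(-10, 11) for y in range(-10, 11)]
print(any(soap_expressible(ra, rb) for ra, rb in candidates))  # False
```

Because policy value here is linear in the action probability, any reward that ties the two pure policies also ties every mixture — which is exactly the kind of task instance the paper shows no Markov reward function can capture.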
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
11 Replies