On the Expressivity of Markov Reward

Published: 09 Nov 2021, Last Modified: 05 May 2023
NeurIPS 2021 Oral
Readers: Everyone
Keywords: Reinforcement Learning, Reward Functions, Reward, Reward Hypothesis, Markov Decision Process
Abstract: Reward is the driving force for reinforcement-learning agents. This paper is dedicated to understanding the expressivity of reward as a way to capture tasks that we would want an agent to perform. We frame this study around three new abstract notions of “task” that might be desirable: (1) a set of acceptable behaviors, (2) a partial ordering over behaviors, or (3) a partial ordering over trajectories. Our main results prove that while reward can express many of these tasks, there exist instances of each task type that no Markov reward function can capture. We then provide a set of polynomial-time algorithms that construct a Markov reward function that allows an agent to optimize tasks of each of these three types, and correctly determine when no such reward function exists. We conclude with an empirical study that corroborates and illustrates our theoretical findings.
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: We study the expressivity of Markov reward functions in finite environments by inspecting what kinds of tasks such functions can express.
Supplementary Material: pdf
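The abstract's claim that one can construct a realizing Markov reward function, or certify that none exists, can be illustrated with a small feasibility check. The sketch below is not the paper's exact construction; the two-state self-loop environment, the "choose the same action in both states" task (in the spirit of the paper's counterexamples), and the linear-program encoding with a slack variable `delta` are all illustrative assumptions, and it relies on `numpy` and `scipy` being available.

```python
"""Illustrative sketch (assumed encoding, not the authors' exact algorithm):
test whether some Markov reward makes every 'acceptable' policy strictly
outperform every unacceptable one, via a small linear program."""
from itertools import product
import numpy as np
from scipy.optimize import linprog

ACTIONS = ("a", "b")
# Assumed toy environment: two states, each a self-loop under both actions,
# uniform start distribution. A deterministic policy is (action in s0, action in s1).
policies = list(product(ACTIONS, repeat=2))
acceptable = {("a", "a"), ("b", "b")}  # task: pick the same action in both states

# Reward variables r[(state, action)], plus a slack delta = worst-case value gap.
idx = {(s, act): i for i, (s, act) in enumerate(product(range(2), ACTIONS))}
n_r = len(idx)

def value_coeffs(policy):
    """Coefficients of a policy's value in the reward variables: with self-loop
    dynamics the value is proportional to the sum of rewards of the chosen
    state-action pairs."""
    c = np.zeros(n_r)
    for s, act in enumerate(policy):
        c[idx[(s, act)]] += 1.0
    return c

# Constraints: V(bad) - V(good) + delta <= 0 for every (acceptable, unacceptable) pair.
A_ub, b_ub = [], []
for good in acceptable:
    for bad in set(policies) - acceptable:
        A_ub.append(np.append(value_coeffs(bad) - value_coeffs(good), 1.0))
        b_ub.append(0.0)

# Maximize delta (minimize -delta); rewards bounded to keep the LP bounded.
c = np.zeros(n_r + 1)
c[-1] = -1.0
bounds = [(-1.0, 1.0)] * n_r + [(0.0, 1.0)]
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds,
              method="highs")

if res.status == 0 and res.x[-1] > 1e-8:
    print("A Markov reward realizing this task exists; slack =", res.x[-1])
else:
    print("No Markov reward makes all acceptable policies strictly optimal.")
```

Running this sketch reports that no Markov reward works for the "same action in both states" task: since each state's value depends only on the action chosen there, no state-action reward can simultaneously favor both equal-action policies over both mixed ones.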