Specifying Behavior Preference with Tiered Reward Functions

Published: 29 Jun 2023, Last Modified: 04 Oct 2023 (MFPL Poster)
Keywords: reinforcement learning, behavior preference, reward design
TL;DR: In reinforcement learning, we propose a partial ordering of policies to deal with behavior preference and present an environment-independent tiered reward structure that will lead to Pareto-optimal policies.
Abstract: Reinforcement-learning agents seek to maximize a reward signal through environmental interactions. As reward designers, our job in the learning process is to express which behaviors are preferable by designing reward functions. In this work, we consider the reward-design problem in tasks formulated as reaching desirable states and avoiding undesirable states. To start, we propose a strict partial ordering of the policy space: we prefer policies that reach the good states faster and with higher probability while avoiding the bad states longer. We then propose an environment-independent tiered reward structure and show that it is guaranteed to induce policies that are Pareto-optimal according to our preference relation.
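One simple way to picture a tiered reward structure is to partition states into ordered tiers (bad absorbing states at the bottom, goal states at the top) and assign each tier a reward so that every tier strictly dominates the tiers below it. The sketch below is illustrative only: the function name `tiered_reward`, the parameter `delta`, and the geometric spacing are assumptions for this example, not the paper's exact construction or gap conditions.

```python
def tiered_reward(tier: int, num_tiers: int, delta: float = 10.0) -> float:
    """Assign a reward based on a state's tier (illustrative sketch).

    tier 0 is the most undesirable tier (e.g. bad absorbing states);
    tier num_tiers - 1 is the goal tier, which receives reward 0.
    Penalties grow geometrically as the tier decreases, so the tiers
    remain strictly ordered: reaching a higher tier sooner is always
    preferred to lingering in a lower one.
    """
    if not 0 <= tier < num_tiers:
        raise ValueError("tier out of range")
    # Geometric spacing: each step down a tier multiplies the penalty,
    # keeping rewards strictly increasing with tier index.
    return -(delta ** (num_tiers - 1 - tier) - 1)

# With 4 tiers and delta=10: tier 0 -> -999, tier 1 -> -99,
# tier 2 -> -9, tier 3 (goal) -> 0.
rewards = [tiered_reward(t, num_tiers=4) for t in range(4)]
```

Because the rewards are strictly ordered across tiers regardless of the environment's transition dynamics, a structure like this can be reused across tasks, which is the sense in which the paper's construction is environment-independent.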
Submission Number: 12