Inverse Preference Learning: Preference-based RL without a Reward Function

Published: 20 Jun 2023, Last Modified: 03 Jul 2023, ILHF Workshop ICML 2023
Keywords: preference learning, preference-based reinforcement learning
TL;DR: We design an offline preference-based RL algorithm that does not require learning a reward function, yet achieves the same performance as standard preference-based RL methods.
Abstract: Reward functions are difficult to design and often hard to align with human intent. Preference-based Reinforcement Learning (RL) algorithms address these problems by learning reward functions from human feedback. However, the majority of preference-based RL methods naïvely combine supervised reward models with off-the-shelf RL algorithms. Contemporary approaches have sought to improve performance and query complexity by using larger and more complex reward architectures such as transformers. Instead of using highly complex architectures, we develop a new and parameter-efficient algorithm, Inverse Preference Learning (IPL), specifically designed for learning from offline preference data. Our key insight is that for a fixed policy, the $Q$-function encodes all information about the reward function, effectively making them interchangeable. Using this insight, we completely eliminate the need for a learned reward function. Our resulting algorithm is simpler and more parameter-efficient. Across a suite of continuous control and robotics benchmarks, IPL attains competitive performance compared to more complex approaches that leverage transformer-based and non-Markovian reward functions while having fewer algorithmic hyperparameters and learned network parameters.
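Illustration (a minimal sketch of the key insight, using standard preference-based RL notation for trajectory segments $\sigma$ and the Bradley-Terry preference model; the exact formulation in the paper may differ): for a fixed policy $\pi$ in a discounted MDP, the Bellman equation can be inverted to express the reward in terms of the $Q$-function,
$$r(s, a) = Q(s, a) - \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\big[V^{\pi}(s')\big],$$
so the likelihood of a preference between two segments can be written directly in terms of $Q$,
$$P(\sigma^1 \succ \sigma^0) = \frac{\exp \sum_{(s,a) \in \sigma^1} \big(Q(s,a) - \gamma\, \mathbb{E}\big[V^{\pi}(s')\big]\big)}{\sum_{i \in \{0, 1\}} \exp \sum_{(s,a) \in \sigma^i} \big(Q(s,a) - \gamma\, \mathbb{E}\big[V^{\pi}(s')\big]\big)},$$
and the preference loss can then be minimized over $Q$ directly, with no separate reward network.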
Submission Number: 8