Meta-Reward-Net: Implicitly Differentiable Reward Learning for Preference-based Reinforcement Learning

Published: 31 Oct 2022, Last Modified: 27 Dec 2022 · NeurIPS 2022 Accept
Keywords: preference-based reinforcement learning, human-in-the-loop reinforcement learning, deep reinforcement learning, bi-level optimization
Abstract: Setting up a well-designed reward function has been challenging for many reinforcement learning applications. Preference-based reinforcement learning (PbRL) provides a new framework that avoids reward engineering by leveraging human preferences (e.g., preferring apples over oranges) as the reward signal. Because collecting preference labels from human teachers is expensive, improving the data efficiency of preference learning becomes critical. In this work, we propose Meta-Reward-Net (MRN), a data-efficient PbRL framework that incorporates bi-level optimization for both reward and policy learning. The key idea of MRN is to adopt the performance of the Q-function as the learning target for the reward function. Based on this, MRN learns the Q-function and the policy in the inner level while updating the reward function adaptively in the outer level according to the performance of the Q-function on the preference data. Our experiments on simulated robotic manipulation and locomotion tasks demonstrate that MRN outperforms prior methods when few preference labels are available and significantly improves data efficiency, achieving state-of-the-art performance in preference-based RL. Ablation studies further demonstrate that MRN learns a more accurate Q-function than prior work and shows clear advantages when only a small amount of human feedback is available. The source code and videos of this project are released at https://sites.google.com/view/meta-reward-net.
TL;DR: A novel preference-based RL method to improve feedback efficiency by incorporating bi-level optimization for reward learning.
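To make the bi-level structure described in the abstract concrete, below is a minimal sketch, assuming a PyTorch implementation with illustrative names (RewardNet, QNet, mrn_style_update, pref_batch, policy_act are all hypothetical, not the authors' code) and using a simple one-step meta-gradient lookahead rather than the paper's implicit-differentiation machinery: the inner level takes a differentiable critic step with rewards from the learned reward network, and the outer level scores preference segments with the updated Q-function via a Bradley-Terry style loss, so that the outer loss backpropagates into the reward parameters.

```python
# Hypothetical sketch of the bi-level reward update; names and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

class RewardNet(nn.Module):
    """r_psi(s, a): learned reward, updated in the outer level."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

class QNet(nn.Module):
    """Q_theta(s, a): critic, trained in the inner level."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def segment_score(score_fn, obs_seg, act_seg):
    """Sum a per-step score over a trajectory segment of shape (batch, T, dim)."""
    b, t = obs_seg.shape[:2]
    scores = score_fn(obs_seg.reshape(b * t, -1), act_seg.reshape(b * t, -1))
    return scores.reshape(b, t).sum(dim=1)

def mrn_style_update(reward_net, q_net, q_target, policy_act,
                     rl_batch, pref_batch, inner_lr=1e-3, gamma=0.99):
    obs, act, next_obs, done = rl_batch  # done: float tensor of shape (batch, 1)

    # Inner level: one differentiable critic step using rewards from r_psi.
    with torch.no_grad():
        next_q = q_target(next_obs, policy_act(next_obs))
    r = reward_net(obs, act)  # gradient flows back to the reward parameters
    q_loss = F.mse_loss(q_net(obs, act), r + gamma * (1.0 - done) * next_q)
    grads = torch.autograd.grad(q_loss, tuple(q_net.parameters()), create_graph=True)
    theta_plus = {name: p - inner_lr * g
                  for (name, p), g in zip(q_net.named_parameters(), grads)}

    # Outer level: Bradley-Terry loss on preference data, scored by the updated Q.
    (obs0, act0), (obs1, act1), label = pref_batch  # label: long tensor, 1 if segment 1 preferred
    q_fn = lambda o, a: functional_call(q_net, theta_plus, (o, a))
    logits = torch.stack([segment_score(q_fn, obs0, act0),
                          segment_score(q_fn, obs1, act1)], dim=1)
    outer_loss = F.cross_entropy(logits, label)
    return q_loss, outer_loss  # outer_loss.backward() reaches reward_net's parameters
```

In a full agent, the actual critic and policy would presumably still be trained in the inner level with a standard off-policy algorithm such as SAC; the lookahead step above exists only to provide gradients for the reward network in the outer level.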