Few-Shot Preference Learning for Human-in-the-Loop RL

Published: 10 Sept 2022, Last Modified: 12 Mar 2024
CoRL 2022 Poster
Readers: Everyone
Keywords: preference learning, interactive learning, multi-task learning, human-in-the-loop
TL;DR: We shift the focus of preference-based reward learning to the multi-task setting, and introduce a novel few-shot preference-based RL algorithm that requires 20× fewer queries than previous methods, enabling data collection from real humans.
Abstract: While reinforcement learning (RL) has become a more popular approach for robotics, designing sufficiently informative reward functions for complex tasks has proven to be extremely difficult due to their inability to capture human intent and their susceptibility to policy exploitation. Preference-based RL algorithms seek to overcome these challenges by learning reward functions directly from human feedback. Unfortunately, prior work either requires an unreasonable number of queries, implausible for any human to answer, or overly restricts the class of reward functions to guarantee the elicitation of the most informative queries, resulting in models that are insufficiently expressive for realistic robotics tasks. Contrary to most works that focus on query selection to \emph{minimize} the amount of data required for learning reward functions, we take an opposite approach: \emph{expanding} the pool of available data by viewing human-in-the-loop RL through the more flexible lens of multi-task learning. Motivated by the success of meta-learning, we pre-train preference models on prior task data and quickly adapt them for new tasks using only a handful of queries. Empirically, we reduce the amount of online feedback needed to train manipulation policies in Meta-World by 20$\times$, and demonstrate the effectiveness of our method on a real Franka Panda robot. Moreover, this reduction in query complexity allows us to train robot policies from actual human users. Videos of our results can be found at \url{https://sites.google.com/view/few-shot-preference-rl/home}.
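
The abstract describes learning a reward model from pairwise human preferences over trajectory segments and then adapting it to a new task with only a handful of queries. As a rough illustration, below is a minimal PyTorch sketch of the Bradley-Terry preference loss that is standard in this line of work, followed by a plain gradient-step adaptation loop. The network architecture, feature dimensions, segment length, and data here are hypothetical placeholders, not the paper's actual method; the paper's pre-training is meta-learning based, and the real implementation is in the linked code repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical per-step reward model: maps (obs, action) features to a scalar reward.
# The 12-dim input and MLP sizes are illustrative assumptions, not from the paper.
reward_model = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(model, seg_a, seg_b, labels):
    """Bradley-Terry preference loss over pairs of trajectory segments.

    seg_a, seg_b: (batch, horizon, feat_dim) segments shown to the annotator.
    labels: (batch,) floats -- 1.0 if seg_a was preferred, 0.0 otherwise.
    """
    # Score each segment by summing its predicted per-step rewards.
    r_a = model(seg_a).sum(dim=1).squeeze(-1)  # (batch,)
    r_b = model(seg_b).sum(dim=1).squeeze(-1)  # (batch,)
    # Bradley-Terry: P(a preferred over b) = sigmoid(R(a) - R(b)).
    return F.binary_cross_entropy_with_logits(r_a - r_b, labels)

# Few-shot adaptation sketch: starting from (hypothetically pre-trained) weights,
# take a few gradient steps on the new task's queries. Random data stands in for
# real segments and human labels here.
seg_a, seg_b = torch.randn(8, 25, 12), torch.randn(8, 25, 12)
labels = torch.randint(0, 2, (8,)).float()
opt = torch.optim.Adam(reward_model.parameters(), lr=3e-4)
for _ in range(10):
    opt.zero_grad()
    loss = preference_loss(reward_model, seg_a, seg_b, labels)
    loss.backward()
    opt.step()
```

The adapted reward model would then supply rewards to a downstream RL algorithm; the key claim of the paper is that pre-training on prior tasks makes this adaptation work with roughly 20× fewer queries than learning the reward from scratch.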
Student First Author: yes
Supplementary Material: zip
Website: https://sites.google.com/view/few-shot-preference-rl/home
Code: https://github.com/jhejna/few-shot-preference-rl/
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2212.03363/code)