Trajectory Improvement and Reward Learning from Comparative Language Feedback

Published: 05 Sept 2024, Last Modified: 08 Nov 2024, CoRL 2024, CC BY 4.0
Keywords: Learning from Human Language Feedback, Reward Learning, Human-Robot Interaction
TL;DR: We propose an approach to learn reward functions from comparative language feedback, which outperforms traditional preference-based learning methods.
Abstract: Learning from human feedback has gained traction in fields like robotics and natural language processing in recent years. While prior works mostly rely on human feedback in the form of comparisons, language is a preferable modality that provides more informative insights into user preferences. In this work, we aim to incorporate comparative language feedback to iteratively improve robot trajectories and to learn reward functions that encode human preferences. To achieve this goal, we learn a shared latent space that integrates trajectory data and language feedback, and subsequently leverage the learned latent space to improve trajectories and learn human preferences. To the best of our knowledge, we are the first to incorporate comparative language feedback into reward learning. Our simulation experiments demonstrate the effectiveness of the learned latent space and the success of our learning algorithms. We also conduct human subject studies that show our reward learning algorithm achieves a 23.9% higher subjective score on average and is 11.3% more time-efficient compared to preference-based reward learning, underscoring the superior performance of our method. Our website is at https://liralab.usc.edu/comparative-language-feedback/.
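To make the abstract's description concrete, here is a minimal sketch of the core idea: trajectories and comparative language feedback (e.g., "move farther from the cup") are embedded into a shared latent space, and a reward that is linear in the latent features is nudged in the direction of the language embedding. This is an illustrative assumption-laden sketch, not the authors' implementation; the encoder architectures, latent dimension, linear-reward form, and the `update_reward_weights` rule are all placeholders.

```python
# Illustrative sketch only (assumptions: encoder architectures, latent size,
# linear reward, and the update rule are NOT taken from the paper).
import torch
import torch.nn as nn

LATENT_DIM = 64  # assumed shared latent dimension


class TrajectoryEncoder(nn.Module):
    """Maps a flattened trajectory (states/actions) into the shared latent space."""

    def __init__(self, traj_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM)
        )

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        return self.net(traj)


class LanguageEncoder(nn.Module):
    """Maps a sentence embedding (e.g., from a pretrained language model)
    into the same latent space as trajectories."""

    def __init__(self, text_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM)
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(text_emb)


def reward(traj_latent: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Reward assumed linear in the latent features: r(xi) = w^T phi(xi)."""
    return traj_latent @ w


def update_reward_weights(w: torch.Tensor, lang_latent: torch.Tensor,
                          lr: float = 0.1) -> torch.Tensor:
    """Toy update: after comparative feedback, move the reward weights toward
    the direction indicated by the language embedding (hypothetical rule)."""
    return w + lr * lang_latent / lang_latent.norm()
```

In this sketch, trajectory improvement would follow the same pattern: step the trajectory's latent embedding along the language direction and decode or search for a trajectory whose embedding matches. Again, this is only one plausible instantiation of the abstract's description.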
Supplementary Material: zip
Spotlight Video: mp4
Video: https://youtu.be/0649vsxY_eg?si=L97BBDEK8wNASG9t
Website: https://liralab.usc.edu/comparative-language-feedback/
Code: https://github.com/USC-Lira/language-preference-learning
Publication Agreement: pdf
Student Paper: yes
Submission Number: 701