Adaptive Language-Guided Abstraction from Contrastive Explanations

Published: 05 Sept 2024 · Last Modified: 13 Sept 2024 · CoRL 2024 · CC BY 4.0
Keywords: reward learning, language-guided abstraction, reward features
TL;DR: We propose a reward-learning algorithm that iteratively specifies missing features and then updates the reward parameters.
Abstract: Many approaches to robot learning begin by inferring a reward function from a set of human demonstrations. To learn a good reward, it is necessary to determine which features of the environment are relevant before determining how these features should be used to compute reward. In particularly complex, high-dimensional environments, human demonstrators often struggle to fully specify their desired behavior from a small number of demonstrations. End-to-end reward learning methods (e.g., using deep networks or program synthesis techniques) often yield brittle reward functions that are sensitive to spurious state features. By contrast, humans can often learn generalizably from a small number of demonstrations by incorporating strong priors about which features of a demonstration are likely meaningful for a task of interest. How do we build robots that leverage this kind of background knowledge when learning from new demonstrations? This paper describes a method named ALGAE, which alternates between using language models to iteratively identify human-meaningful features needed to explain demonstrated behavior and using standard inverse reinforcement learning techniques to assign weights to these features. Experiments across a variety of both simulated and real-world robot environments show that ALGAE learns generalizable reward functions defined on interpretable features using only small numbers of demonstrations. Importantly, ALGAE can recognize when features are missing, then extract and define those features without any human input, making it possible to quickly and efficiently acquire rich representations of user behavior.
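To make the alternation described in the abstract concrete, the sketch below shows one plausible shape of such a loop. All names and signatures here (`fit_weights`, `residual_demos`, `propose_feature`) are hypothetical stand-ins for the two steps the abstract describes, not the paper's actual implementation.

```python
from typing import Callable, List, Sequence, Tuple

# Hypothetical sketch of an ALGAE-style alternating loop (illustrative only,
# not the authors' implementation). The caller supplies the two sub-steps:
#   fit_weights:     standard IRL step, (demos, features) -> weights
#   residual_demos:  demos the current reward fails to explain
#   propose_feature: language-model step, (unexplained demos, features) -> new feature
def algae_loop(
    demos: Sequence,
    features: List[Callable],
    fit_weights: Callable,
    residual_demos: Callable,
    propose_feature: Callable,
    max_iters: int = 10,
) -> Tuple[List[Callable], list]:
    """Alternate between IRL weight fitting and LM-guided feature proposal."""
    weights = fit_weights(demos, features)
    for _ in range(max_iters):
        unexplained = residual_demos(demos, features, weights)
        if not unexplained:
            break  # current features already explain the demonstrations
        # Ask the language model for a feature that accounts for the residual behavior.
        features = features + [propose_feature(unexplained, features)]
        weights = fit_weights(demos, features)
    return features, weights
```

The key design point this sketch illustrates is that feature discovery and weight estimation are interleaved: new features are requested only when the current reward fails to explain some demonstrations, so the feature set grows on demand rather than being fixed up front.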
Publication Agreement: pdf
Student Paper: yes
Supplementary Material: zip
Submission Number: 422