Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning

Published: 10 Sept 2022, Last Modified: 05 May 2023
Venue: CoRL 2022 (Oral)
Keywords: Reinforcement Learning, Interactive Perception, Task Specification
TL;DR: We present LIRF, a framework that conveniently trains example-based interactive robot policies to evaluate robot task policies; these evaluations both provide rewards for training the task policies and verify their execution in partially observed tasks.
Abstract: Physical interactions can often help reveal information that is not readily apparent. For example, we may tug at a table leg to evaluate whether it is built well, or turn a water bottle upside down to check that it is watertight. We propose to train robots to acquire such interactive behaviors automatically, for the purpose of evaluating the result of an attempted robotic skill execution. These evaluations in turn serve as "interactive reward functions" (IRFs) for training reinforcement learning policies to perform the target skill, such as screwing the table leg tightly. In addition, even after task policies are fully trained, IRFs can serve as verification mechanisms that improve online task execution. For any given task, our IRFs can be conveniently trained using only examples of successful outcomes, and no further specification is needed to train the task policy thereafter. In our evaluations on door locking and weighted block stacking in simulation, and screw tightening on a real robot, IRFs enable large performance improvements, even outperforming baselines with access to demonstrations or carefully engineered rewards.
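To make the two roles of the IRF concrete, below is a minimal toy sketch in Python of the abstract's pipeline: train a success evaluator from success examples only, use it as the reward for learning a task policy, then reuse it as an execution-time verifier. All names here (probe, train_irf_classifier, the threshold classifier, the synthetic negatives, the hill-climbing "policy") are illustrative assumptions, not the authors' implementation; the paper additionally learns the interactive probing behavior itself with RL rather than hand-coding it.

```python
# Toy sketch of an interactive reward function (IRF) on a 1-D "screw
# tightening" task. Hypothetical, simplified stand-in for the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def probe(tightness):
    """Interactive probe: 'tug' the screw and read back a noisy torque.
    Stands in for the learned interactive evaluation behavior."""
    return tightness + rng.normal(0.0, 0.05)

def train_irf_classifier(success_examples, n_negatives=200):
    """Fit a 1-D threshold classifier from success-only outcome examples.
    Negatives are synthesized from random outcomes here (an assumption;
    the paper uses on-policy data rather than synthetic negatives)."""
    pos = np.array([probe(t) for t in success_examples])
    neg = np.array([probe(t) for t in rng.uniform(0.0, 1.0, n_negatives)])
    # Choose the threshold that best separates probed successes from
    # probed random outcomes.
    candidates = np.linspace(0.0, 1.0, 101)
    scores = [(pos >= c).mean() + (neg < c).mean() for c in candidates]
    return candidates[int(np.argmax(scores))]

def irf_reward(tightness, threshold):
    """Reward for the task policy: probe the outcome, then classify it."""
    return float(probe(tightness) >= threshold)

# 1) Train the IRF from examples of successful outcomes only.
success_examples = rng.uniform(0.9, 1.0, 50)  # outcomes where the screw is tight
threshold = train_irf_classifier(success_examples)

# 2) Use the IRF as the only reward for a trivial one-parameter "task
#    policy" (how much torque to apply), trained by hill climbing; no
#    hand-engineered task reward is needed.
theta, step = 0.2, 0.1
for _ in range(200):
    cand = float(np.clip(theta + rng.normal(0.0, step), 0.0, 1.0))
    if irf_reward(cand, threshold) >= irf_reward(theta, threshold):
        theta = cand

# 3) Reuse the IRF as an execution-time verifier of the final outcome.
verified = bool(irf_reward(theta, threshold))
print(f"learned torque: {theta:.2f}, verified success: {verified}")
```

Running this sketch, the hill climber starts at a loose setting and drifts until the IRF begins returning reward, illustrating why a success classifier applied after an informative interaction can substitute for a carefully engineered reward.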
Student First Author: yes
Supplementary Material: zip
Website: https://sites.google.com/view/lirf-corl-2022/
Code: https://github.com/penn-pal-lab/interactive_reward_functions