Fast Reinforcement Learning without Rewards or Demonstrations via Auxiliary Task Examples

Published: 29 Oct 2024 · Last Modified: 03 Nov 2024 · CoRL 2024 Workshop MRM-D Poster · CC BY 4.0
Keywords: Inverse Reinforcement Learning, Multitask Learning, Manipulation
TL;DR: Learning from examples can lead to fast reinforcement learning if you schedule exploration of auxiliary tasks.
Abstract: An alternative to using hand-crafted rewards and full-trajectory demonstrations in reinforcement learning is the use of examples of completed tasks, but such an approach can be extremely sample-inefficient. We introduce value-penalized auxiliary control from examples (VPACE), an algorithm that significantly improves exploration in example-based control by adding examples of simple auxiliary tasks and an above-success-level value penalty. Across both simulated and real robotic environments, we show that our approach substantially improves learning efficiency for challenging tasks, while maintaining bounded value estimates. Preliminary results also suggest that VPACE may learn more efficiently than the more common approaches of using full trajectories or true sparse rewards. Project site: https://papers.starslab.ca/vpace/.
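A minimal sketch, not the authors' implementation, of the two ideas named in the abstract: scheduling exploration across a main task and simple auxiliary tasks defined only by success examples, and penalizing value estimates that rise above the value level of those examples so that estimates stay bounded. All names here (`QNet`, `value_penalty`, `sample_task`, `penalty_weight`) are hypothetical placeholders, not identifiers from the VPACE codebase.

```python
# Hypothetical sketch of an above-success-level value penalty and a simple
# auxiliary-task scheduler; not the authors' code.
import random
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small critic over concatenated (observation, action) vectors."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def value_penalty(q_net, obs, act, example_obs, example_act, weight=1.0):
    """Penalize Q estimates that exceed the success-example value level.

    The mean Q-value over example success states is treated as an upper
    bound; only the excess above it is penalized (squared hinge).
    """
    with torch.no_grad():
        success_level = q_net(example_obs, example_act).mean()
    excess = torch.clamp(q_net(obs, act) - success_level, min=0.0)
    return weight * (excess ** 2).mean()


def sample_task(tasks):
    """Uniform scheduler over the main task and its auxiliary tasks."""
    return random.choice(tasks)
```

In this reading, the penalty term would be added to the usual critic loss on every update, while the scheduler decides which task's success examples (and exploration) the current episode targets; the paper's actual scheduling and penalty formulation are specified on the project site.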
Submission Number: 25