EXTRACT: Efficient Policy Learning by Extracting Transferable Robot Skills from Offline Data

Published: 05 Sept 2024, Last Modified: 08 Nov 2024 · CoRL 2024 · CC BY 4.0
Keywords: reinforcement learning, skill-based reinforcement learning, skill learning, transfer learning, foundation models for robotics, robot learning
TL;DR: We extract discrete, meaningfully-aligned skills from offline data for efficient reinforcement learning of new tasks.
Abstract: Most reinforcement learning (RL) methods focus on learning optimal policies over low-level action spaces. While these methods can perform well in their training environments, they lack the flexibility to transfer to new tasks. Instead, RL agents that can act over useful, temporally extended skills rather than low-level actions can learn new tasks more easily. Prior work in skill-based RL either requires expert supervision to define useful skills, which is hard to scale, or learns a skill space from offline data with heuristics that limit the adaptability of the skills, making them difficult to transfer during downstream RL. Our approach, EXTRACT, instead utilizes pre-trained vision-language models to extract a discrete set of semantically meaningful skills from offline data, each parameterized by continuous arguments, without human supervision. This skill parameterization allows robots to learn new tasks by only needing to learn when to select a specific skill and how to modify its arguments for the task at hand. We demonstrate through experiments in sparse-reward, image-based robot manipulation environments that EXTRACT can learn new tasks more quickly than prior methods, with major gains in sample efficiency and performance over prior skill-based RL.
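To make the skill parameterization concrete, below is a minimal illustrative sketch (not the authors' code) of a downstream policy head of the kind the abstract describes: it picks one of a discrete set of skills and then predicts continuous arguments for the chosen skill. All class names, network sizes, and dimensions here are assumptions for illustration only.

```python
# Hypothetical sketch of a discrete-skill + continuous-argument policy head.
# Not EXTRACT's actual implementation; sizes and names are placeholders.
import torch
import torch.nn as nn


class SkillArgumentPolicy(nn.Module):
    def __init__(self, obs_dim: int, num_skills: int, arg_dim: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.skill_logits = nn.Linear(hidden, num_skills)        # which skill to select
        self.arg_mean = nn.Linear(hidden + num_skills, arg_dim)  # how to modulate that skill
        self.arg_log_std = nn.Parameter(torch.zeros(arg_dim))

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        # Categorical choice over the discrete skill set.
        skill_dist = torch.distributions.Categorical(logits=self.skill_logits(h))
        skill = skill_dist.sample()
        # Continuous arguments conditioned on the sampled skill (one-hot conditioning).
        one_hot = nn.functional.one_hot(skill, self.skill_logits.out_features).float()
        mean = self.arg_mean(torch.cat([h, one_hot], dim=-1))
        arg_dist = torch.distributions.Normal(mean, self.arg_log_std.exp())
        return skill, arg_dist.sample(), skill_dist, arg_dist


# Usage: sample (skill id, continuous arguments) for a batch of observations.
policy = SkillArgumentPolicy(obs_dim=64, num_skills=8, arg_dim=10)
skill, args, _, _ = policy(torch.randn(4, 64))
```

Under this structure, downstream RL only has to learn the skill-selection distribution and the argument modulation, rather than a full low-level action policy.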
Website: jessezhang.net/projects/extract
Student Paper: yes
Submission Number: 152