Fast Imitation via Behavior Foundation Models

Published: 07 Nov 2023, Last Modified: 05 Dec 2023 · FMDM@NeurIPS2023
Keywords: Imitation learning, reinforcement learning, foundation models
TL;DR: Pre-trained behavioral foundation models can solve a variety of imitation learning tasks with few demonstrations and without any additional learning.
Abstract: Imitation learning (IL) aims to produce agents that can imitate any behavior given a few expert demonstrations. Yet existing approaches typically require many demonstrations and/or running (online or offline) reinforcement learning (RL) algorithms for each new imitation task. Here we show that recent RL foundation models based on successor measures can imitate any expert behavior almost instantly with just a few demonstrations and no need for RL or fine-tuning, while accommodating several IL principles (behavioral cloning, feature matching, reward-based, and goal-based reductions). In our experiments, imitation via RL foundation models matches, and often surpasses, the performance of state-of-the-art offline IL algorithms, and produces imitation policies from new demonstrations within seconds instead of hours.
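The "seconds instead of hours" claim rests on how successor-measure foundation models (e.g., forward-backward representations) are used: the pre-trained model exposes a family of policies indexed by a task vector z, and a demonstration is mapped to z by a closed-form average rather than by any gradient-based training. Below is a minimal, hypothetical sketch of that inference step for the reward-based reduction mentioned in the abstract. The networks, shapes, and normalization here are illustrative stand-ins, not the paper's actual architecture or API.

```python
import torch
import torch.nn as nn

S_DIM, A_NUM, Z_DIM = 8, 4, 16  # hypothetical state, action, and embedding sizes

class BackwardNet(nn.Module):
    """Stand-in for a pre-trained backward map B(s) into the task-embedding space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(S_DIM, 64), nn.ReLU(), nn.Linear(64, Z_DIM))

    def forward(self, s):
        return self.net(s)

class ForwardNet(nn.Module):
    """Stand-in for a pre-trained forward map F(s, a, z), one head per discrete action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(S_DIM + Z_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, A_NUM * Z_DIM))

    def forward(self, s, z):
        out = self.net(torch.cat([s, z.expand(s.shape[0], -1)], dim=-1))
        return out.view(-1, A_NUM, Z_DIM)

B, F = BackwardNet(), ForwardNet()  # in practice these would be pre-trained weights

def infer_task_vector(expert_states):
    # Reward-based reduction of imitation: treat the expert's state occupancy as
    # the reward signal and average B over demonstrated states. No RL, no fine-tuning.
    with torch.no_grad():
        z = B(expert_states).mean(dim=0)
    return z / z.norm()  # keeping z on the unit sphere is an assumption of this sketch

def act(state, z):
    # Greedy policy pi_z: pick the action maximizing F(s, a, z) . z, the
    # forward-backward estimate of the Q-value for the inferred task.
    with torch.no_grad():
        q = (F(state.unsqueeze(0), z) * z).sum(dim=-1)  # shape (1, A_NUM)
    return q.argmax(dim=-1).item()

demos = torch.randn(32, S_DIM)   # a handful of expert demonstration states
z = infer_task_vector(demos)     # the only "learning" is this average
action = act(torch.randn(S_DIM), z)
print("inferred z norm:", z.norm().item(), "| action:", action)
```

Because inferring z is a single batched forward pass and an average, producing an imitation policy from new demonstrations costs roughly one inference call, which is consistent with the abstract's contrast to running an offline IL algorithm per task.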
Submission Number: 25