Keywords: Reinforcement Learning, Foundation Models, Robotics, VLMs
Abstract: Reinforcement learning (RL) is a promising approach for solving robotic manipulation tasks.
However, applying RL algorithms directly in the real world is challenging.
First, RL is data-intensive and typically requires millions of environment interactions, which is impractical in real scenarios.
Second, designing reward functions by hand demands heavy engineering effort.
To address these issues, we leverage foundation models in this paper.
We propose Reinforcement Learning with Foundation Priors (RLFP) to utilize guidance and feedback from policy, value, and success-reward foundation models.
Within this framework, we introduce the Foundation-guided Actor-Critic (FAC) algorithm, which enables embodied agents to explore more efficiently with automatically constructed reward functions (see the illustrative sketch below).
The benefits of our framework are threefold: (1) \textit{sample efficiency}; (2) \textit{minimal and effective reward engineering}; (3) \textit{agnosticism to foundation model forms and robustness to noisy priors}. Our method achieves remarkable performance on various manipulation tasks, both on real robots and in simulation. Across 5 dexterous tasks on real robots, FAC achieves an average success rate of 86\% after one hour of real-time learning.
Across 8 tasks in the simulated Meta-World benchmark, FAC achieves a 100\% success rate on 7 of 8 tasks within 100k frames (about one hour of training), outperforming baseline methods that use manually designed rewards and are trained for 1M frames.
We believe the RLFP framework can enable future robots to autonomously explore and learn a broader range of tasks in the physical world.
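The abstract does not spell out how the three priors enter the learning loop, so the following is only a minimal illustrative sketch, not the paper's actual FAC implementation: `policy_prior`, `value_prior`, `success_reward`, the potential-based shaping in `shaped_reward`, and the epsilon-guided exploration in `guided_action` are all hypothetical stand-ins chosen for this example.

```python
import numpy as np

# Hypothetical stand-ins for the three foundation priors assumed by RLFP.
def policy_prior(obs):
    """Hypothetical policy prior: suggests a coarse action for the observation."""
    return np.clip(-obs[:2], -1.0, 1.0)

def value_prior(obs):
    """Hypothetical value prior: coarse task-progress estimate in [0, 1]."""
    return float(np.exp(-np.linalg.norm(obs[:2])))

def success_reward(obs):
    """Hypothetical success-reward prior: sparse 0/1 success signal."""
    return float(np.linalg.norm(obs[:2]) < 0.05)

def shaped_reward(obs, next_obs, gamma=0.99):
    """Automatic reward: sparse success plus potential-based shaping from the
    value prior, so no manual reward engineering is needed."""
    return success_reward(next_obs) + gamma * value_prior(next_obs) - value_prior(obs)

def guided_action(obs, learned_action, epsilon=0.3):
    """Guided exploration: occasionally follow the policy prior's suggestion
    instead of the current learned policy's action."""
    return policy_prior(obs) if np.random.rand() < epsilon else learned_action
```

Potential-based shaping is used in this sketch only because it leaves the optimal policy unchanged even under a noisy value prior; the paper's actual reward construction may differ.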
Supplementary Material: zip
Website: https://yewr.github.io/rlfp
Code: https://github.com/YeWR/RLFP
Publication Agreement: pdf
Student Paper: yes
Submission Number: 144