Learning Visual Parkour from Generated Images

Published: 05 Sept 2024 · Last Modified: 08 Nov 2024 · CoRL 2024 · CC BY 4.0
Keywords: Generative AI, Simulation, Legged Locomotion, Sensorimotor Learning
TL;DR: Train a state-of-the-art quadruped parkour policy on generated data
Abstract: Fast and accurate physics simulation is an essential component of robot learning, where robots can explore failure scenarios that are difficult to produce in the real world and learn from unlimited on-policy data. Yet it remains challenging to incorporate color (RGB) perception into the sim-to-real pipeline in a way that matches the real world's richness and realism. In this work, we train a robot dog in simulation for visual parkour. We propose a method that uses generative models to synthesize diverse and physically accurate image sequences of the scene from the robot's egocentric perspective. We demonstrate zero-shot transfer to RGB-only observations of the real world on a robot equipped with a low-cost, off-the-shelf color camera.
Supplementary Material: zip
Spotlight Video: mp4
Website: https://lucidsim.github.io
Student Paper: yes
Submission Number: 47
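
Below is a minimal sketch of the kind of training loop the abstract describes: geometry is rendered from the simulator, a generative model converts it into a photorealistic egocentric RGB frame, and the visuomotor policy consumes those generated frames. This is an illustration of the idea under stated assumptions, not the authors' released code; every name here (render_depth, generate_rgb, policy) is a hypothetical placeholder.

```python
# Hypothetical sketch of training on generated images (not the authors' API).
# Assumption: the generative model is conditioned on simulator geometry
# (e.g. a depth map) so the synthetic RGB stays physically consistent with
# the robot's egocentric view.
import numpy as np

rng = np.random.default_rng(0)

def render_depth(sim_state):
    """Placeholder: render an egocentric depth map from the physics sim."""
    return rng.random((64, 64))

def generate_rgb(depth, prompt):
    """Placeholder: a geometry-conditioned image generator producing a
    photorealistic RGB frame that matches the simulated scene."""
    return np.stack([depth] * 3, axis=-1)  # stand-in for a generated image

def policy(rgb_frame):
    """Placeholder visuomotor policy mapping an RGB observation to actions."""
    return rng.normal(size=12)  # e.g. 12 joint targets for a quadruped

# On-policy rollout in simulation, but with generated RGB observations,
# so the policy trained here can be deployed on a real color camera.
sim_state = {"t": 0}
for step in range(3):
    depth = render_depth(sim_state)
    rgb = generate_rgb(depth, prompt="stone stairs in a forest, overcast")
    action = policy(rgb)
    sim_state["t"] += 1  # advance the simulator with `action` (omitted)
```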