Keywords: Locomotion, Reinforcement Learning, Multi-embodiment Learning
TL;DR: We propose a neural network architecture that can learn locomotion over multiple legged robot embodiments and morphologies.
Abstract: Deep Reinforcement Learning techniques are achieving state-of-the-art results in robust legged locomotion.
While there exists a wide variety of legged platforms such as quadrupeds, humanoids, and hexapods, the field is still missing a single learning framework that can control all these different embodiments easily and effectively, and possibly transfer, zero- or few-shot, to unseen robot embodiments.
To close this gap, we introduce URMA, the Unified Robot Morphology Architecture. Our framework brings the end-to-end Multi-Task Reinforcement Learning approach to the realm of legged robots, enabling the learned policy to control any type of robot morphology.
The key idea of our method is to allow the network to learn an abstract locomotion controller that can be seamlessly shared between embodiments thanks to our morphology-agnostic encoders and decoders. This flexible architecture can be seen as a first step in building a foundation model for legged robot locomotion.
Our experiments show that URMA can learn a locomotion policy on multiple embodiments that can be easily transferred to unseen robot platforms in simulation and the real world.
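To make the idea of morphology-agnostic encoding concrete, below is a minimal sketch, not the released URMA implementation: it assumes each joint's observation is paired with a learned description vector and pooled with attention into a fixed-size latent, so embodiments with different joint counts feed the same shared core network. The class and parameter names (MorphologyAgnosticEncoder, joint_desc_dim, etc.) are hypothetical.

```python
# Illustrative sketch only (assumed design, not the authors' released code):
# a shared per-joint encoder plus attention pooling over a variable number of
# joints, producing a fixed-size latent for any embodiment.
import torch
import torch.nn as nn


class MorphologyAgnosticEncoder(nn.Module):
    def __init__(self, joint_obs_dim: int, joint_desc_dim: int, latent_dim: int):
        super().__init__()
        # Shared per-joint encoder, applied to every joint of every embodiment.
        self.joint_encoder = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_desc_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Attention scores decide how much each joint contributes to the pooled latent.
        self.attention = nn.Linear(latent_dim, 1)

    def forward(self, joint_obs: torch.Tensor, joint_desc: torch.Tensor) -> torch.Tensor:
        # joint_obs:  (num_joints, joint_obs_dim)   -- varies per embodiment
        # joint_desc: (num_joints, joint_desc_dim)  -- static description of each joint
        per_joint = self.joint_encoder(torch.cat([joint_obs, joint_desc], dim=-1))
        weights = torch.softmax(self.attention(per_joint), dim=0)
        # Weighted sum over joints -> fixed-size embedding, independent of joint count.
        return (weights * per_joint).sum(dim=0)


# Usage: a quadruped (12 joints) and a humanoid (23 joints) map to the same latent size.
encoder = MorphologyAgnosticEncoder(joint_obs_dim=3, joint_desc_dim=8, latent_dim=64)
quadruped_latent = encoder(torch.randn(12, 3), torch.randn(12, 8))
humanoid_latent = encoder(torch.randn(23, 3), torch.randn(23, 8))
assert quadruped_latent.shape == humanoid_latent.shape == (64,)
```

A matching morphology-agnostic decoder would reverse the pooling, producing one action per joint from the shared latent; that is the part that lets a single abstract locomotion controller drive arbitrary joint layouts.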
Supplementary Material: zip
Spotlight Video: mp4
Video: https://www.youtube.com/watch?v=BbbBAH-T7-Q&ab_channel=NicoBohlinger
Website: https://nico-bohlinger.github.io/one_policy_to_run_them_all_website/
Code: https://github.com/nico-bohlinger/one_policy_to_run_them_all
Publication Agreement: pdf
Student Paper: yes
Submission Number: 376