Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization

12 Oct 2021 (modified: 25 Nov 2024) · Deep RL Workshop, NeurIPS 2021
Keywords: Legged Locomotion, Visual Reinforcement Learning, Sim-to-real
TL;DR: Our method enables quadruped robots to traverse complex environments with obstacles of different shapes in the wild.
Abstract: Developing robust vision-guided controllers for quadrupedal robots in complex environments, with various obstacles, dynamic surroundings, and uneven terrain, is very challenging. While Reinforcement Learning (RL) provides a promising paradigm for learning agile locomotion skills from vision inputs in simulation, deploying the RL policy in the real world remains very challenging. Our key insight is that, aside from the domain gap in visual appearance between simulation and the real world, the latency of the control pipeline is also a major source of difficulty. In this paper, we propose Multi-Modal Delay Randomization (MMDR) to address this issue when training RL agents. Specifically, we simulate the latency of real hardware by using past observations, sampled with randomized periods, for both proprioception and vision. We train the RL policy for end-to-end control in a physical simulator without any predefined controller or reference motion, and deploy it directly on a real A1 quadruped robot running in the wild. We evaluate our method in different outdoor environments with complex terrain and obstacles, and demonstrate that the robot can maneuver smoothly at high speed, avoid obstacles, and improve significantly over the baselines. Our project page with videos is at https://mehooz.github.io/mmdr-wild/.
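To make the delay-randomization idea concrete, below is a minimal sketch of the mechanism the abstract describes: keeping short histories of each sensing modality and feeding the policy stale samples with independently randomized delays. The class name, buffer lengths, and delay ranges are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of Multi-Modal Delay Randomization (MMDR): the policy receives past
# observations, sampled with randomized delays, for both proprioception and
# vision. All sizes and ranges below are hypothetical choices for illustration.
from collections import deque
import random


class DelayRandomizedObservations:
    """Keeps a short history per modality and serves delayed samples."""

    def __init__(self, max_delay_proprio=3, max_delay_vision=6):
        # Separate buffers, since proprioception and vision arrive with
        # different latencies on real hardware.
        self.proprio_buf = deque(maxlen=max_delay_proprio + 1)
        self.vision_buf = deque(maxlen=max_delay_vision + 1)
        self.max_delay_proprio = max_delay_proprio
        self.max_delay_vision = max_delay_vision

    def push(self, proprio_obs, vision_obs):
        """Record the latest simulator observations (newest at index -1)."""
        self.proprio_buf.append(proprio_obs)
        self.vision_buf.append(vision_obs)

    def sample(self):
        """Return delayed observations with independently randomized delays."""
        assert self.proprio_buf and self.vision_buf, "call push() before sample()"
        d_p = random.randint(0, min(self.max_delay_proprio, len(self.proprio_buf) - 1))
        d_v = random.randint(0, min(self.max_delay_vision, len(self.vision_buf) - 1))
        # Index -1 is the current step; -(d + 1) reaches d steps into the past.
        return self.proprio_buf[-(d_p + 1)], self.vision_buf[-(d_v + 1)]
```

Randomizing the delay of each modality independently, rather than applying one shared lag, forces the policy to tolerate the asynchronous sensing latencies it will encounter on the physical robot.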
Community Implementations: 2 code implementations at https://www.catalyzex.com/paper/vision-guided-quadrupedal-locomotion-in-the/code
