Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data

Published: 10 Sept 2022, Last Modified: 05 May 2023, CoRL 2022 Poster
Keywords: Sim-to-Real, Reinforcement Learning, Autonomous Driving
TL;DR: Sim-to-Real for end-to-end off-road autonomous driving without any real-world training data.
Abstract: Autonomous driving is complex, requiring sophisticated 3D scene understanding, localization, mapping, and control. Rather than explicitly modelling and fusing each of these components, we instead consider an end-to-end approach via reinforcement learning (RL). However, collecting exploration driving data in the real world is impractical and dangerous. While training in simulation and deploying visual sim-to-real techniques has worked well for robot manipulation, deploying beyond controlled workspace viewpoints remains a challenge. In this paper, we address this challenge by presenting Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving, without using any real-world data. This is done by learning to translate randomized simulation images into simulated segmentation and depth maps, subsequently enabling real-world images to also be translated. This allows us to train an end-to-end RL policy in simulation and directly deploy it in the real world. Our approach, which can be trained in 48 hours on 1 GPU, performs as well as a classical perception and control stack that took thousands of engineering hours over several months to build. We hope this work motivates future end-to-end autonomous driving research.
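Below is a minimal, hypothetical sketch of the two-stage idea the abstract describes: a translator network maps RGB images (randomized simulation images at training time, real camera images at deployment) into segmentation and depth maps, and an RL policy is trained purely in simulation on top of that shared representation. The module names, network shapes, and losses here are illustrative assumptions, not the authors' implementation; see the linked repository (https://github.com/rll-research/sim2seg) for the actual code.

```python
# Hypothetical sketch of the Sim2Seg idea (assumed architecture and losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Sim2SegTranslator(nn.Module):
    """Maps an RGB image to per-pixel segmentation logits and a depth map."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Upsample back to input resolution (assumed head design).
        self.seg_head = nn.ConvTranspose2d(64, num_classes, 4, stride=4)
        self.depth_head = nn.ConvTranspose2d(64, 1, 4, stride=4)

    def forward(self, rgb):
        feats = self.encoder(rgb)
        return self.seg_head(feats), self.depth_head(feats)

# Stage 1: supervise the translator on randomized simulation images paired with
# the simulator's ground-truth segmentation and depth (placeholder tensors here).
translator = Sim2SegTranslator()
optimizer = torch.optim.Adam(translator.parameters(), lr=1e-4)
rgb = torch.rand(4, 3, 128, 128)             # randomized sim images
gt_seg = torch.randint(0, 8, (4, 128, 128))  # sim segmentation labels
gt_depth = torch.rand(4, 1, 128, 128)        # sim depth maps

optimizer.zero_grad()
seg_logits, depth_pred = translator(rgb)
loss = F.cross_entropy(seg_logits, gt_seg) + F.l1_loss(depth_pred, gt_depth)
loss.backward()
optimizer.step()

# Stage 2 (not shown): train the driving policy in simulation on the translated
# seg/depth representation. At deployment, the frozen translator maps real
# images into the same representation, so the policy transfers without real data.
```

Because the policy only ever sees segmentation and depth, the same frozen translator can be applied to real camera frames at test time, which is what lets the simulation-trained policy run in the real world without any real training data.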
Student First Author: yes
Supplementary Material: zip
Website: https://sites.google.com/view/sim2segcorl2022/home?pli=1
Code: https://github.com/rll-research/sim2seg
