Imitation Learning for Generalizable Self-driving Policy with Sim-to-real Transfer

Published: 27 Apr 2022, Last Modified: 22 Oct 2023
ICLR 2022 GPL Poster
Keywords: Imitation Learning, Sim-to-real, Domain Randomization, Domain Adaptation, Self-driving Policy Learning
TL;DR: In this paper, we use Imitation Learning techniques to solve a complex self-driving robotics task. Our results show that DAgger is the preferable choice, as it achieves the best performance at the cost of only slightly more training time than BC.
Abstract: Imitation Learning uses the demonstrations of an expert to uncover the optimal policy, and it is also well suited to real-world robotics tasks. In such tasks, however, the agent is trained in a simulation environment due to safety, economic, and time constraints, and is later deployed in the real-world domain using sim-to-real methods. In this paper, we apply Imitation Learning methods that solve a robotics task in a simulated environment, and we use transfer learning to deploy these solutions in the real-world environment. Our task is set in the Duckietown environment, where the robotic agent has to follow the right lane based on the input images of a single forward-facing camera. We present three Imitation Learning and two sim-to-real methods capable of solving this task, and we provide a detailed comparison of these techniques to highlight their advantages and disadvantages.
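
To make the trade-off between BC and DAgger mentioned in the TL;DR concrete, the sketch below outlines a simplified DAgger training loop in Python. It is only an illustration of the general technique, not the paper's implementation: the `env`, `expert`, and `learner` objects and their `act`/`fit` interface are hypothetical placeholders, and the beta-mixture of expert and learner rollouts used in the full DAgger algorithm is omitted for brevity.

```python
# Minimal DAgger loop sketch (illustrative only, not the authors' code).
# Assumptions: `env` follows the classic Gym API (reset/step), `expert`
# exposes act(obs) -> action, and `learner` exposes act(obs) -> action
# plus fit(observations, actions).

def dagger(env, expert, learner, iterations=10, episode_len=500):
    """Aggregate expert-labelled states visited by the learner, then retrain."""
    observations, actions = [], []
    for _ in range(iterations):
        obs = env.reset()
        for _ in range(episode_len):
            # The expert labels every state the current learner visits.
            observations.append(obs)
            actions.append(expert.act(obs))
            # Roll out the learner's own policy; plain BC would instead
            # collect states only from the expert's rollouts.
            obs, _, done, _ = env.step(learner.act(obs))
            if done:
                break
        # Retrain the learner on the aggregated dataset.
        learner.fit(observations, actions)
    return learner
```

The extra training time of DAgger relative to BC comes from the repeated environment rollouts and retraining rounds in this loop, while the aggregation of expert labels on learner-visited states is what improves robustness to compounding errors.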
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2206.10797/code)