Keywords: Autonomous Vehicle Control, Data Driven Simulation, End-to-End Learning
Abstract: Recent studies have shown that even vast collections of data from real drivers are insufficient to train autonomous vehicle controllers capable of generalizing to the variety of situations that can occur in the real world. End-to-end reinforcement learning within simulation offers many potential advantages for learning safety-critical controllers directly from an agent's raw perception. Unfortunately, existing simulators lack the photorealism needed to train such machine learning models for autonomous vehicles. In this work, we present a novel data-driven simulation and training engine capable of learning end-to-end autonomous vehicle controllers without any human supervision. We demonstrate the ability of these controllers to generalize to and navigate in the real world without access to any human control commands during training. Our results validate the learned control policy onboard a full-scale autonomous vehicle, including in previously unencountered scenarios such as new roads and novel, complex, near-crash situations.