End-To-End Multi-Modal Sensors Fusion System For Urban Automated Driving

Published: 24 Nov 2018, Last Modified: 05 May 2023. NIPS 2018 Workshop MLITS Submission.
Abstract: In this paper, we present a novel framework for urban automated driving based on multi-modal sensors: LiDAR and camera. Environment perception through sensor fusion is key to the successful deployment of automated driving systems, especially in complex urban areas. Our hypothesis is that a well-designed deep neural network can learn, end-to-end, a driving policy that fuses LiDAR and camera sensory input, achieving the best of both. To improve the generalization and robustness of the learned policy, semantic segmentation is applied to the camera images, in addition to our new LiDAR post-processing method, Polar Grid Mapping (PGM). The system is evaluated on the recently released urban driving simulator CARLA, with performance measured by how well the learned policy generalizes from one environment to another. The experimental results show that the best performance is achieved by fusing PGM and semantic segmentation.
Keywords: End-to-end learning, Conditional imitation learning, Sensors fusion
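The abstract does not disclose the network architecture, so the following is only a minimal sketch of what a command-conditioned, two-stream LiDAR–camera fusion policy could look like. The layer sizes, input resolutions, the treatment of the PGM as a single-channel 2D range image, and the per-command control heads are all illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class FusionDrivingPolicy(nn.Module):
    """Hypothetical two-stream fusion policy for conditional imitation learning."""

    def __init__(self, num_commands=4):
        super().__init__()
        # Camera stream: semantically segmented image (3 x 88 x 200, CARLA-like size).
        self.cam_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LiDAR stream: Polar Grid Map treated as a 1-channel 2D image
        # (e.g. elevation x azimuth bins storing range) -- an assumption.
        self.lidar_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Conditional imitation learning: one control head per high-level
        # navigation command (e.g. follow lane, left, right, straight).
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(128 + 64, 128), nn.ReLU(), nn.Linear(128, 3))
            for _ in range(num_commands)
        )  # each head outputs (steer, throttle, brake)

    def forward(self, cam_img, lidar_pgm, command):
        # Fuse the two modality embeddings by concatenation.
        fused = torch.cat(
            [self.cam_encoder(cam_img), self.lidar_encoder(lidar_pgm)], dim=1
        )
        # Evaluate all heads, then select the one matching each sample's command.
        outputs = torch.stack([head(fused) for head in self.heads], dim=1)
        return outputs[torch.arange(fused.size(0)), command]


# Example forward pass with dummy tensors.
policy = FusionDrivingPolicy()
cam = torch.randn(2, 3, 88, 200)   # segmented camera frames
pgm = torch.randn(2, 1, 32, 180)   # LiDAR polar grid maps
cmd = torch.tensor([0, 2])         # per-sample navigation commands
controls = policy(cam, pgm, cmd)   # -> shape (2, 3)
```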