End-To-End Multi-Modal Sensors Fusion System For Urban Automated Driving

Ibrahim Sobh, Loay Amin, Sherif Abdelkarim, Khaled Elmadawy, Mahmoud Saeed, Omar Abdeltawab, Mostafa Gamal, Ahmad El Sallab

Oct 10, 2018 · NIPS 2018 Workshop MLITS Submission
  • Abstract: In this paper, we present a novel framework for urban automated driving based on multi-modal sensors: LiDAR and camera. Environment perception through sensor fusion is key to the successful deployment of automated driving systems, especially in complex urban areas. Our hypothesis is that a well-designed deep neural network can learn, end to end, a driving policy that fuses LiDAR and camera sensory input, achieving the best of both. To improve the generalization and robustness of the learned policy, semantic segmentation is applied to the camera input, in addition to our new LiDAR post-processing method, Polar Grid Mapping (PGM). The system is evaluated on the recently released urban driving simulator CARLA, with performance measured by how well a policy generalizes from one environment to another. The experimental results show that the best performance is achieved by fusing PGM and semantic segmentation.
  • Keywords: End-to-end learning, Conditional imitation learning, Sensors fusion
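The abstract names Polar Grid Mapping (PGM) as the LiDAR post-processing step but does not spell out its mechanics. A common way to realize such a polar-grid representation is to project the 3-D point cloud onto a 2-D grid indexed by azimuth and elevation angles, storing the range of the nearest return per cell. The sketch below illustrates that general idea; the function name, bin counts, and vertical field of view are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def polar_grid_map(points, h_bins=360, v_bins=32,
                   v_fov=(-np.pi / 8, np.pi / 16)):
    """Project an (N, 3) LiDAR point cloud onto a 2-D polar grid.

    Each cell (elevation bin, azimuth bin) stores the range (depth) of
    the nearest return that falls into it; empty cells stay 0.
    Bin counts and vertical field of view are illustrative defaults.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)               # range of each return
    azimuth = np.arctan2(y, x)                    # horizontal angle
    elevation = np.arcsin(z / np.maximum(r, 1e-9))  # vertical angle

    # Discretize the angles into grid indices.
    h_idx = ((azimuth + np.pi) / (2 * np.pi) * h_bins).astype(int) % h_bins
    v_idx = ((elevation - v_fov[0]) / (v_fov[1] - v_fov[0]) * v_bins).astype(int)
    valid = (v_idx >= 0) & (v_idx < v_bins) & (r > 0)

    grid = np.zeros((v_bins, h_bins), dtype=np.float32)
    # Write points far-to-near so nearer returns overwrite farther ones,
    # leaving the closest range in each occupied cell.
    order = np.argsort(-r[valid])
    vi, hi, ri = v_idx[valid][order], h_idx[valid][order], r[valid][order]
    grid[vi, hi] = ri
    return grid
```

The resulting dense 2-D range image can then be fed to a convolutional network alongside the camera's semantic segmentation, which is one plausible way the fusion described in the abstract could consume both modalities.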