Two-stream convolutional networks for end-to-end learning of self-driving cars

Sep 20, 2018 NIPS 2018 Workshop Spatiotemporal Blind Submission readers: everyone
  • Keywords: Spatiotemporal modelling, motion prediction, autonomous navigation, self-driving cars, computer vision, deep learning, multitask learning, real-time decision making
  • TL;DR: We propose an autonomous navigation system with spatiotemporal cues using CNNs and multitask learning.
  • Abstract: We propose a methodology that extends the concept of Two-Stream Convolutional Networks to end-to-end learning for self-driving cars with temporal cues. The system learns spatiotemporal features by simultaneously mapping raw images and pre-computed optical flows directly to steering commands. Although optical flows encode temporally rich information, we found that 2D-CNNs are prone to capturing features only as spatial representations. We show how Multitask Learning favors the learning of temporal features via inductive transfer from a shared spatiotemporal representation. Preliminary results demonstrate a competitive improvement of 30% in prediction accuracy and stability compared to widely used regression methods trained on the Comma.ai dataset.
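The architecture described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the real system uses trained 2D-CNN streams, whereas here each stream is a stand-in (global average pooling plus a random linear map), and the layer sizes, the auxiliary task, and all variable names are hypothetical. The sketch shows only the structural idea: two streams (raw image and optical flow) fused into a shared spatiotemporal representation, on top of which multiple task heads enable inductive transfer via multitask learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: one RGB frame and a 2-channel optical-flow field.
H, W = 64, 64
frame = rng.standard_normal((3, H, W))   # spatial stream input
flow = rng.standard_normal((2, H, W))    # temporal stream input

def stream_features(x, out_dim, rng):
    """Stand-in for a 2D-CNN stream: global-average-pool, linear map, ReLU."""
    pooled = x.mean(axis=(1, 2))                 # (channels,)
    W_mat = rng.standard_normal((out_dim, pooled.shape[0]))
    return np.maximum(W_mat @ pooled, 0.0)       # (out_dim,)

# Two streams: appearance from the raw image, motion from optical flow.
f_spatial = stream_features(frame, 32, rng)
f_temporal = stream_features(flow, 32, rng)

# Late fusion into a shared spatiotemporal representation.
shared = np.concatenate([f_spatial, f_temporal])  # (64,)

# Multitask heads on the shared representation: the main steering
# regression plus a hypothetical auxiliary head (e.g., speed), whose
# gradients would encourage the shared features to stay temporal.
W_steer = rng.standard_normal((1, shared.shape[0]))
W_aux = rng.standard_normal((1, shared.shape[0]))
steering = float(W_steer @ shared)
aux_speed = float(W_aux @ shared)
```

In training, the two head losses would be summed (possibly weighted), so the temporal stream cannot be ignored without hurting the auxiliary objective; this is the inductive-transfer mechanism the abstract refers to.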