Abstract: Vision-based end-to-end steering control is a popular and challenging task in autonomous driving. Previous methods take a single image or an image sequence as input and predict the steering angle with deep neural networks. Images contain rich color and texture information but lack spatial structure. In this work, we therefore incorporate LiDAR data to provide spatial structure and propose a novel multi-modal attention model, named PilotAttnNet, for end-to-end steering angle prediction. We also present a new end-to-end self-driving dataset, Pandora-Driving, which provides synchronized LiDAR and image sequences together with the corresponding standard driving behaviors. Our dataset covers rich driving scenarios, including urban, country, and off-road environments. Extensive experiments on both the publicly available LiVi-Set and our Pandora-Driving dataset demonstrate the strong performance of the proposed method.