On Camera and LiDAR Positions in End-to-End Autonomous Driving

Published: 11 Aug 2024, Last Modified: 20 Sept 2024 · ECCV 2024 W-CODA Workshop Full Paper Track · CC BY 4.0
Keywords: End-to-End Autonomous Driving, Sensors, CARLA
TL;DR: This paper investigates the influence of camera and LiDAR positions on the driving performance of end-to-end autonomous driving models.
Subject: Safety/explainability/robustness for end-to-end autonomous driving
Confirmation: I have read and agree with the submission policies of ECCV 2024 and the W-CODA Workshop on behalf of myself and my co-authors.
Abstract: Autonomous vehicles rely on various sensors to perceive their environment. The placement of these sensors is critical, as their position determines which environmental information can be perceived. In this paper, we investigate the influence of the positions of three front-facing RGB cameras and a LiDAR sensor on end-to-end autonomous driving performance. Furthermore, we explore the effects of a positioning mismatch between training and testing (i.e., vehicle operation). In total, four sensor configurations are investigated. We employ the CARLA simulator and the recently published TransFuser architecture for end-to-end autonomous driving. To ensure comparability between runs despite CARLA's non-deterministic traffic manager, we collect the sensor data for all configurations in a single simulation run. We find that sensor positions close to and above the rearview mirror outperform both the roof center and the very high (impractical) baseline position with respect to the overall driving score. A sensor position mismatch between training and testing leads to a drop in all performance metrics. However, multi-condition models trained on a mix of sensor positions significantly regain performance on the infraction score, thereby improving the model's robustness against domain shifts caused by sensor position mismatches between training and testing.
Supplementary Material: pdf
Submission Number: 3
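
The abstract's key methodological point is collecting sensor data for all mounting configurations in a single simulation run, so every configuration observes the exact same (non-deterministic) traffic. Below is a minimal sketch of that setup using CARLA's Python API; it is not the authors' released code, and the blueprint choice, mount coordinates, resolutions, output paths, and tick count are illustrative assumptions, not the paper's exact values.

```python
# Sketch: attach RGB cameras at several candidate mount positions to one ego
# vehicle in a single synchronous CARLA run, so all configurations record the
# same traffic. Mount coordinates below are hypothetical placeholders.
import queue

import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Synchronous mode aligns all sensor frames to the same simulation ticks;
# the traffic manager must be switched to synchronous mode as well.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)
client.get_trafficmanager().set_synchronous_mode(True)

blueprints = world.get_blueprint_library()
ego_bp = blueprints.filter("vehicle.lincoln.mkz_2020")[0]  # assumed vehicle
spawn_point = world.get_map().get_spawn_points()[0]
ego = world.spawn_actor(ego_bp, spawn_point)
ego.set_autopilot(True)

cam_bp = blueprints.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "960")
cam_bp.set_attribute("image_size_y", "480")

# Hypothetical mount positions (meters, vehicle frame): near the rearview
# mirror, at the roof center, and a very high baseline-style position.
mounts = {
    "mirror": carla.Transform(carla.Location(x=0.8, z=1.6)),
    "roof":   carla.Transform(carla.Location(x=0.0, z=2.0)),
    "high":   carla.Transform(carla.Location(x=0.0, z=3.5)),
}

image_queues = {}
sensors = []
for name, transform in mounts.items():
    q = queue.Queue()
    cam = world.spawn_actor(cam_bp, transform, attach_to=ego)
    cam.listen(q.put)  # each callback enqueues frames for one mount position
    image_queues[name] = q
    sensors.append(cam)

try:
    for _ in range(100):  # 100 ticks at 0.05 s = 5 s of simulated driving
        world.tick()
        for name, q in image_queues.items():
            image = q.get(timeout=2.0)  # same tick for every configuration
            image.save_to_disk(f"out/{name}/{image.frame:06d}.png")
finally:
    for s in sensors:
        s.destroy()
    ego.destroy()
```

Pulling one frame per queue after each tick guarantees that the saved images for all mount positions correspond to identical world states, which is what makes the per-configuration driving results comparable despite the traffic manager's non-determinism. A LiDAR sensor could be attached the same way via the `sensor.lidar.ray_cast` blueprint.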