DDD17: End-To-End DAVIS Driving Dataset

25 Apr 2024 (modified: 06 Jun 2017) · ICML 2017 MLAV Submission
Abstract: Event cameras such as dynamic vision sensors (DVS) and dynamic and active-pixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale image sensor frames. The DVS events represent brightness changes; they have a dynamic range of >120 dB and effective frame rates >1 kHz, with data rates comparable to those of 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in end-to-end driving applications. We provide DDD17, the first open dataset of annotated DAVIS driving recordings. DDD17 contains 12 h of recordings from a 346x260 pixel DAVIS sensor covering highway and city driving in daytime, evening, night, dry, and wet weather conditions, along with vehicle speed, GPS position, and other data, plus driver steering, throttle, and brake captured from the car's on-board diagnostics interface. As an example application, we performed a preliminary end-to-end learning study in which a convolutional neural network (CNN) is trained to predict the instantaneous steering angle from the DVS and APS visual data. We provide networks that compute the steering angle using a CNN alone and using a CNN with a small recurrent neural network at its output.
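As a rough illustration of the pipeline the abstract describes, the sketch below accumulates DVS events into a 2D frame and defines a small steering-angle regressor, optionally with a recurrent layer at the CNN output. This is a minimal sketch, assuming PyTorch, events given as (timestamp, x, y, polarity) tuples, and the 346x260 sensor resolution stated above; the architecture, layer sizes, and function names are illustrative assumptions, not the authors' actual models.

```python
# Minimal sketch, not the DDD17 authors' code. Assumes PyTorch and events
# supplied as (timestamp, x, y, polarity) tuples with polarity > 0 for ON events.
import torch
import torch.nn as nn

SENSOR_W, SENSOR_H = 346, 260  # DAVIS resolution stated in the abstract


def events_to_frame(events, width=SENSOR_W, height=SENSOR_H):
    """Accumulate DVS events into a 2D frame: +1 per ON event, -1 per OFF event."""
    frame = torch.zeros(height, width)
    for _t, x, y, polarity in events:
        frame[y, x] += 1.0 if polarity > 0 else -1.0
    return frame


class SteeringNet(nn.Module):
    """Small CNN regressor mapping one input channel (an accumulated DVS frame
    or an APS grayscale image) to a single steering-angle value, optionally
    followed by a GRU over a sequence of frames."""

    def __init__(self, use_rnn=False):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 64)
        )
        self.use_rnn = use_rnn
        self.rnn = nn.GRU(64, 32, batch_first=True) if use_rnn else None
        self.head = nn.Linear(32 if use_rnn else 64, 1)

    def forward(self, x):
        # x: (batch, seq, 1, H, W) if use_rnn, else (batch, 1, H, W)
        if self.use_rnn:
            b, s = x.shape[:2]
            feats = self.features(x.flatten(0, 1)).view(b, s, -1)
            out, _ = self.rnn(feats)
            return self.head(out[:, -1])  # predict from the last time step
        return self.head(self.features(x))


# Usage: regress steering from a single accumulated DVS frame.
frame = events_to_frame([(0, 10, 20, 1), (1, 11, 20, 0)])
angle = SteeringNet()(frame.unsqueeze(0).unsqueeze(0))  # (1, 1) output
```

A mean-squared-error loss against the recorded steering angle would complete the end-to-end training setup the abstract outlines; the recurrent variant mirrors the paper's addition of a small RNN at the CNN output to exploit temporal context.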
TL;DR: Introduces the first open dataset of DAVIS neuromorphic event-camera driving data with end-to-end labeling
Keywords: autonomous driving, DVS, dynamic vision sensor, event-based vision, dataset, event camera, end-to-end