aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception

Published: 07 Apr 2023, Last Modified: 14 Apr 2024
Venue: ICLR 2023 Workshop SR4AD (hybrid)
TL;DR: https://github.com/aimotive/aimotive_dataset
Abstract: Autonomous driving is a popular research area within the computer vision community. Since autonomous vehicles are highly safety-critical, ensuring robustness is essential for real-world deployment. While several public multimodal datasets are available, they mainly comprise two sensor modalities (camera and LiDAR), which are not well suited for adverse weather. In addition, they lack far-range annotations, making it harder to train the neural networks that underpin a highway assistant function of an autonomous vehicle. Therefore, we introduce a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The data was captured on highways and in urban and suburban areas during daytime, at night, and in rain, and is annotated with 3D bounding boxes with consistent identifiers across frames. Consequently, our dataset can be used to train long-range end-to-end driving models or joint perception and prediction models. Furthermore, we trained unimodal and multimodal baseline models for 3D object detection.
Track: Original Contribution
Type: Repository
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2211.09445/code)
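
As a usage illustration of the annotation format described in the abstract (3D bounding boxes with track identifiers that are consistent across frames), below is a minimal sketch of grouping boxes by track id over a scene. The file layout and field names (`boxes`, `center`, `size`, `yaw`, `track_id`) are assumptions for illustration only; consult the repository at https://github.com/aimotive/aimotive_dataset for the actual schema and loading tools.

```python
# Sketch: collect 3D box annotations per track id across the frames of a scene.
# The per-frame JSON layout assumed here is hypothetical; the real dataset's
# schema is documented in the aimotive_dataset repository.
import json
from collections import defaultdict
from pathlib import Path


def load_tracks(annotation_dir: str) -> dict[int, list[dict]]:
    """Group 3D boxes by track id over all annotated frames of one scene."""
    tracks: dict[int, list[dict]] = defaultdict(list)
    for frame_file in sorted(Path(annotation_dir).glob("*.json")):
        with open(frame_file) as f:
            frame = json.load(f)
        for box in frame["boxes"]:  # assumed key
            tracks[box["track_id"]].append(
                {
                    "frame": frame_file.stem,
                    "center": box["center"],  # assumed: [x, y, z] in meters
                    "size": box["size"],      # assumed: [length, width, height]
                    "yaw": box["yaw"],        # assumed: heading in radians
                }
            )
    return tracks


if __name__ == "__main__":
    # Hypothetical scene directory; one JSON annotation file per frame.
    tracks = load_tracks("scene_0001/annotations")
    for track_id, boxes in tracks.items():
        print(f"track {track_id}: annotated in {len(boxes)} frames")
```

Because identifiers persist across frames, per-track box sequences like these can serve directly as supervision for the joint perception and prediction models the abstract mentions.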