SPECTRA: Synchronized Stereo Event-Camera Driving Dataset for Diverse Perception Tasks

Published: 21 Sept 2025, Last Modified: 14 Oct 2025 | NeuRobots 2025 Spotlight | CC BY 4.0
Keywords: Event Camera, Deep Learning, Dense Depth Map, Dataset, Neuromorphic
TL;DR: Beyond creating SPECTRA, we produce multimodal ground truths and pseudo-labels, including dense, learning-ready depth and semantic recovery of moving obstacles missing from LiDAR, enabling fair, real-world evaluation of event-based perception.
Abstract: Event-based vision is emerging as a transformative technology for autonomous driving, offering high temporal resolution, low latency, and robust performance under challenging conditions such as low light and high dynamic range. To support both research progress and practical adoption of this technology, diverse, high-quality datasets are critical for training and evaluating deep learning models effectively. Recognizing this need, we introduce a novel dataset specifically designed for event-based stereo vision in autonomous driving scenarios. Our dataset combines data from event-based cameras, RGB cameras, LiDARs, and IMUs, offering a multimodal foundation for a wide range of perception tasks. It includes precise ground truths for object detection, depth estimation, and pose tracking, enabling researchers to develop and benchmark models across multiple tasks. The dataset is meticulously synchronized across all sensors to facilitate the exploration of sensor fusion strategies and the development of algorithms tailored for event-based perception.
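To make the abstract's notion of "learning-ready" event data concrete, the sketch below shows one common way an asynchronous event stream is converted into a dense tensor that deep learning models can consume: a spatio-temporal voxel grid. This is a generic, illustrative technique, not SPECTRA's actual loading API; the function name and array layout are assumptions.

```python
import numpy as np

def events_to_voxel_grid(xs, ys, ts, ps, num_bins, height, width):
    """Illustrative sketch: accumulate an event stream into a voxel grid.

    xs, ys : int arrays of pixel coordinates
    ts     : float array of timestamps (any consistent unit)
    ps     : polarities in {-1, +1}
    Returns a (num_bins, height, width) float32 tensor suitable as
    network input. Not the dataset's official preprocessing.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t0, t1 = ts[0], ts[-1]
    # Normalize timestamps into [0, num_bins - 1] and assign each
    # event to a temporal bin (guard against a zero-length window).
    tn = (ts - t0) / max(t1 - t0, 1e-9) * (num_bins - 1)
    bins = np.clip(tn.astype(int), 0, num_bins - 1)
    # Unbuffered accumulation so repeated pixels sum correctly.
    np.add.at(grid, (bins, ys, xs), ps)
    return grid
```

Many event-based depth and detection pipelines feed a representation like this to a standard CNN; finer binning or per-bin timestamp weighting are common variations.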
Submission Number: 4