Depth Estimation from Moving Stereo Event Cameras without Motion Cues

Published: 21 Sept 2025, Last Modified: 14 Oct 2025
Venue: NeuRobots 2025 Spotlight (Talk, Poster)
License: CC BY 4.0
Keywords: Event Camera, Stereo Depth Estimation
TL;DR: A framework for depth estimation with stereo event cameras.
Abstract: Depth estimation is highly beneficial for robots performing navigation or manipulation. Traditional cameras suffer from motion blur in dynamic and high-speed scenarios, whereas event cameras are robust to such conditions while also offering high temporal resolution, low latency, and high dynamic range. However, existing event-based methods require parameter tuning that depends on camera speed, as well as external measurements of camera motion. In this paper, we present a lightweight framework for real-time depth estimation using stereo event cameras (typically a front-end for SLAM). We propose a velocity-invariant event representation, which removes the need for speed-dependent parameter tuning, combined with Semi-Global Block Matching for fast depth estimation without camera motion cues or external sensors. We achieve consistent depth estimation under both slow motion (extremely sparse data) and fast motion (motion blur). Our pipeline runs in real-time using only the CPU, producing output at over 100 Hz on the MVSEC dataset (i.e. $1.6\times$ faster than state-of-the-art), while also achieving higher (or competitive) accuracy on publicly available datasets.
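To illustrate the kind of back-end the abstract describes, here is a minimal sketch of Semi-Global Block Matching followed by disparity-to-depth triangulation, assuming the velocity-invariant event representations have already been accumulated into rectified 8-bit stereo frames. OpenCV's `StereoSGBM` stands in for the paper's matcher, and all parameter values, intrinsics, and names (`left_repr`, `right_repr`, `focal_px`, `baseline_m`) are illustrative placeholders rather than the authors' settings.

```python
# Hypothetical sketch: stereo depth from pre-computed event representations.
# Assumes rectified, speed-normalized 8-bit single-channel images; SGBM
# parameters and camera intrinsics below are placeholders, not the paper's.
import cv2
import numpy as np

def depth_from_event_representations(left_repr, right_repr,
                                     focal_px=200.0, baseline_m=0.10):
    # Semi-Global Block Matching (OpenCV's StereoSGBM) on the two views.
    num_disp = 64                      # must be divisible by 16
    block = 5
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,
        blockSize=block,
        P1=8 * block * block,          # smoothness penalties (OpenCV convention)
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = sgbm.compute(left_repr, right_repr).astype(np.float32) / 16.0

    # Triangulate: depth = f * B / disparity, masking invalid pixels.
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

if __name__ == "__main__":
    # Dummy inputs at DAVIS-346 resolution, only to show the call; real inputs
    # would be the velocity-invariant event representations from the paper.
    left_repr = np.random.randint(0, 256, (260, 346), dtype=np.uint8)
    right_repr = np.random.randint(0, 256, (260, 346), dtype=np.uint8)
    print(depth_from_event_representations(left_repr, right_repr).shape)
```

The point of the sketch is the structure (block matching on a speed-normalized representation, then closed-form triangulation), not the specific values; the paper's actual representation and matcher configuration are not reproduced here.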
Submission Number: 2