Depth-consistent Motion Blur Augmentation

ICLR 2026 Conference Submission 24794 Authors

20 Sept 2025 (modified: 08 Oct 2025)
License: CC BY 4.0
Keywords: Motion Blur, Augmentation, Segmentation, Depth estimation
Abstract: Motion blur is a ubiquitous phenomenon commonly encountered in lightweight, handheld cameras. Addressing this degradation is essential for preserving visual fidelity and ensuring the robustness of vision models for scene understanding tasks. In the literature, robustness to motion blur has been generally treated like other degradations; this despite the complex space-variant nature of motion blur due to scene dynamics and its inherent dependence on scene geometry and depth. While some recent works addressing this issue have introduced space-variant blur due to scene dynamics, they fall back on space-invariant blurring to model camera egomotion which is imperfect. This work proposes an efficient methodology to generate space-variant depth-consistent blur to model camera egomotion by leveraging depth foundation models. We refer to our approach as Depth-consistent Motion Blur Augmentation (DMBA). To demonstrate the effectiveness of DMBA in improving robustness to realistic motion blur, we provide experiments for the tasks of semantic segmentation and self-supervised monocular depth estimation. We include results for standard networks on the Cityscapes dataset for semantic segmentation and the KITTI dataset for monocular depth estimation. We also illustrate the improved generalizability of our method to complex real-world scenes by evaluating on commonly used datasets GoPro and REDS that contain real motion blur.
Supplementary Material: pdf
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 24794