Manydepth2: Motion-Aware Self-Supervised Monocular Depth Estimation in Dynamic Scenes

Published: 01 Jan 2025, Last Modified: 04 Aug 2025 · IEEE Robotics and Automation Letters 2025 · CC BY-SA 4.0
Abstract: Despite advancements in self-supervised monocular depth estimation, challenges persist in dynamic scenarios due to the reliance on the static-world assumption. In this paper, we present Manydepth2, which achieves precise depth estimation for both dynamic objects and static backgrounds while maintaining computational efficiency. To address the challenges introduced by dynamic content, we incorporate optical flow into monocular depth estimation, allowing our model to distinguish between dynamic and static regions in multi-frame inputs. We then construct a motion-aware cost volume across multiple frames by incorporating this dynamic-region information, which is used for accurate depth estimation. Furthermore, to improve the accuracy and robustness of the network architecture, we propose an attention-based depth network that effectively integrates information from feature maps at different resolutions through both channel and non-local attention mechanisms. Compared to methods with similar computational cost, Manydepth2 reduces the root-mean-square error of self-supervised monocular depth estimation on the KITTI-2015 dataset by approximately five percent.
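To make the motion-aware cost volume idea concrete, the following is a minimal PyTorch sketch of one plausible formulation: dynamic regions are detected from the discrepancy between the full optical flow and the rigid flow induced by ego-motion, and matching costs in those regions are down-weighted so the depth network falls back to single-frame cues there. All function names, tensor shapes, and the thresholding strategy are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a motion-aware cost volume (not Manydepth2's actual code).
import torch


def dynamic_region_mask(full_flow: torch.Tensor,
                        rigid_flow: torch.Tensor,
                        threshold: float = 1.0) -> torch.Tensor:
    """Return a [B, 1, H, W] mask that is 1 for (likely) static pixels.

    full_flow:  [B, 2, H, W] optical flow between consecutive frames
    rigid_flow: [B, 2, H, W] flow induced by camera motion and depth alone
    Pixels where the two disagree by more than `threshold` pixels are
    treated as belonging to moving objects.
    """
    residual = torch.norm(full_flow - rigid_flow, dim=1, keepdim=True)
    return (residual < threshold).float()


def motion_aware_cost_volume(cost_volume: torch.Tensor,
                             static_mask: torch.Tensor) -> torch.Tensor:
    """Down-weight matching costs in dynamic regions.

    cost_volume: [B, D, H, W] photometric matching costs over D depth bins
    static_mask: [B, 1, H, W] mask from `dynamic_region_mask`
    Dynamic pixels are given a uniform (uninformative) cost profile so the
    multi-frame matching signal does not corrupt depth for moving objects.
    """
    uniform = cost_volume.mean(dim=1, keepdim=True).expand_as(cost_volume)
    return static_mask * cost_volume + (1.0 - static_mask) * uniform


if __name__ == "__main__":
    B, D, H, W = 1, 32, 96, 320
    full_flow = torch.randn(B, 2, H, W)
    rigid_flow = torch.randn(B, 2, H, W)
    cost_volume = torch.rand(B, D, H, W)
    mask = dynamic_region_mask(full_flow, rigid_flow)
    cv = motion_aware_cost_volume(cost_volume, mask)
    print(mask.shape, cv.shape)
```

In this sketch, the flow-residual threshold plays the role of the dynamic/static separation described in the abstract; the masked cost volume would then be fed, together with image features, to the attention-based depth decoder.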