Spatial-Aware Dynamic Lightweight Self-Supervised Monocular Depth Estimation

Published: 01 Jan 2024 (Last Modified: 22 Feb 2024), IEEE Robotics and Automation Letters, 2024
Abstract: Self-supervised monocular depth estimation has attracted extensive attention in recent years, and lightweight depth estimation methods are crucial for resource-constrained edge devices. However, existing lightweight methods often suffer from limited representation capacity and consume excessive computational resources for image reconstruction. To alleviate these issues, we propose a novel spatial-aware dynamic lightweight monocular depth estimation method (SAD-Depth). Specifically, we propose a spatial-aware dynamic encoder that captures spatial information of the input and generates input-adaptive dynamic convolutions, significantly enhancing the model's adaptability to complex scenes. Meanwhile, we propose a multi-scale sub-pixel lightweight decoder that generates high-quality depth maps while maintaining a lightweight design. Experimental results demonstrate that our proposed SAD-Depth is superior in both model size and inference speed, achieving state-of-the-art performance on the KITTI benchmark.
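The abstract names two generic building blocks: input-adaptive dynamic convolutions (a kernel is assembled per input, typically as an attention-weighted mix of K candidate kernels) and sub-pixel upsampling in the decoder (channels are rearranged into a higher-resolution map, as in PixelShuffle). The following NumPy sketch illustrates both ideas in their standard form only; the function names, shapes, and the pooling-plus-linear attention branch are illustrative assumptions, not the paper's actual SAD-Depth architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_kernel(x, kernels, proj):
    """Mix K candidate kernels into one input-adaptive kernel.
    x:       (C, H, W) input feature map
    kernels: (K, kh, kw) learnable candidate kernels
    proj:    (K, C) linear projection producing attention logits
    Global average pooling summarizes the input; a softmax over the
    projected context weights the candidates (generic dynamic-convolution
    scheme, hypothetical parameter shapes)."""
    ctx = x.mean(axis=(1, 2))                   # (C,) global context
    attn = softmax(proj @ ctx)                  # (K,) per-input weights
    return np.tensordot(attn, kernels, axes=1)  # (kh, kw) mixed kernel

def pixel_shuffle(x, r):
    """Sub-pixel upsampling: rearrange (C*r^2, H, W) -> (C, H*r, W*r),
    trading channel depth for spatial resolution without interpolation."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)              # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Because the mixed kernel depends on the input's pooled statistics, two different scenes convolve with two different kernels at no extra depth cost, which is the usual motivation for dynamic convolutions in lightweight models; the sub-pixel step likewise avoids the cost of transposed convolutions in the decoder.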