Abstract: This paper tackles the challenging problem of monocular depth estimation in partially known environments. We propose a novel deep convolutional neural network architecture that takes an RGB image and partial depth samples as input and estimates an accurate full depth map of the scene. The network is equipped with a newly proposed dense depth sampling strategy and an input skip connection, both of which substantially improve estimation performance. We also introduce a novel combined loss function that encourages spatial smoothness in the predicted depth maps. Evaluation results show that our architecture achieves significant performance improvements over the baseline method on a newly created depth estimation dataset.
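The abstract does not specify the exact input encoding or the form of the combined loss; purely as an illustrative sketch, the snippet below assumes a PyTorch setting in which the partial depth samples are stacked with the RGB image as a fourth input channel, and the combined loss pairs an L1 reconstruction term with a first-order smoothness penalty. The names make_network_input, combined_loss, and the weight smooth_weight are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def make_network_input(rgb, sparse_depth):
    """Stack the RGB image and the partial depth map into a 4-channel input.

    rgb: (B, 3, H, W); sparse_depth: (B, 1, H, W), zero at unsampled pixels.
    """
    return torch.cat([rgb, sparse_depth], dim=1)

def combined_loss(pred, gt, valid_mask, smooth_weight=0.1):
    """Reconstruction error plus a smoothness penalty (one possible formulation)."""
    # L1 depth error, computed only where ground-truth depth is available.
    recon = F.l1_loss(pred[valid_mask], gt[valid_mask])

    # First-order smoothness: penalize large horizontal/vertical depth gradients
    # to encourage spatially smooth predicted depth maps.
    dx = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean()
    dy = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean()
    return recon + smooth_weight * (dx + dy)
```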