Semantically-Guided Representation Learning for Self-Supervised Monocular Depth

Venue: ICLR 2020 Conference Blind Submission
Submitted: 25 Sept 2019 (modified: 11 Mar 2020)
Code: https://github.com/tri-ml/packnet-sfm
Keywords: computer vision, machine learning, deep learning, monocular depth estimation, self-supervised learning
TL;DR: We propose a novel semantically-guided architecture for self-supervised monocular depth estimation
Abstract: Self-supervised learning is showing great promise for monocular depth estimation, using geometry as the only source of supervision. Depth networks are indeed capable of learning representations that relate visual appearance to 3D properties by implicitly leveraging category-level patterns. In this work we investigate how to leverage this semantic structure more directly to guide geometric representation learning, while remaining in the self-supervised regime. Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture that leverages fixed, pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions. Furthermore, we propose a two-stage training process that overcomes a common semantic bias on dynamic objects via resampling. Our method improves upon the state of the art in self-supervised monocular depth prediction, both over all pixels and on fine-grained details, as well as per semantic category.
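The core mechanism the abstract names is the pixel-adaptive convolution (Su et al., CVPR 2019), in which features from a fixed semantic segmentation network modulate a standard convolution per pixel. Below is a minimal PyTorch sketch of that operation; the class name, the Gaussian kernel choice, and the shapes are illustrative assumptions, not the paper's exact implementation (see the linked packnet-sfm repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAdaptiveConv2d(nn.Module):
    """Sketch of a pixel-adaptive convolution (Su et al., CVPR 2019).

    Spatially-shared weights W are modulated per pixel by a kernel K
    on guiding features f (here, semantic features):
        y_i = sum_j K(f_i, f_j) * W[j - i] * x_j
    """
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.pad = kernel_size // 2
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)

    def forward(self, x, guide):
        B, C, H, W = x.shape
        k2 = self.k * self.k
        # Unfold inputs and guiding features into per-pixel k*k patches:
        # (B, C, k*k, H*W) and (B, Cg, k*k, H*W).
        x_u = F.unfold(x, self.k, padding=self.pad).view(B, C, k2, H * W)
        g_u = F.unfold(guide, self.k, padding=self.pad)
        g_u = g_u.view(B, guide.shape[1], k2, H * W)
        # Gaussian kernel between the center pixel's guide feature and
        # each neighbor's: K(f_i, f_j) = exp(-0.5 * ||f_i - f_j||^2).
        center = g_u[:, :, k2 // 2 : k2 // 2 + 1]
        kernel = torch.exp(-0.5 * ((g_u - center) ** 2).sum(1, keepdim=True))
        # Modulate the patches, then apply the shared conv weights.
        x_u = x_u * kernel  # kernel broadcasts over input channels
        w = self.weight.view(self.weight.shape[0], -1)  # (out_ch, C*k*k)
        y = torch.einsum('oc,bcl->bol', w, x_u.view(B, C * k2, H * W))
        return y.view(B, -1, H, W)

# Example (hypothetical shapes): semantic features from a frozen
# segmentation network guide a convolution over depth features.
pac = PixelAdaptiveConv2d(in_ch=64, out_ch=64)
depth_feats = torch.randn(2, 64, 32, 32)
sem_feats = torch.randn(2, 16, 32, 32)
out = pac(depth_feats, sem_feats)  # (2, 64, 32, 32)
```

Because the segmentation network is fixed and only supplies the guiding features, the depth network can absorb semantic structure without semantic labels or proxy losses, keeping training fully self-supervised.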