Synthetic-to-Real Self-supervised Robust Depth Estimation via Learning with Motion and Structure Priors
Abstract: Self-supervised depth estimation from monocular cameras in diverse outdoor conditions, such as daytime, rain, and nighttime, is challenging due to the difficulty of learning universal representations and the severe lack of labeled real-world adverse data. Previous methods either rely on synthetic inputs and pseudo-depth labels or directly apply daytime strategies to adverse conditions, yielding suboptimal results. In this paper, we present the first synthetic-to-real robust depth estimation framework, incorporating motion and structure priors to capture real-world knowledge effectively. In the synthetic adaptation stage, we transfer motion-structure knowledge inside cost volumes for more robust representations, using a frozen daytime model to train a depth estimator on synthetic adverse conditions. In the real adaptation stage, which targets the remaining synthetic-to-real gap, the previously trained models identify weather-insensitive regions through a designed consistency-reweighting strategy that emphasizes valid pseudo-labels. We further introduce a new regularization that gathers an explicit depth-distribution prior to constrain the model on real-world data. Experiments show that our method outperforms the state of the art across diverse conditions in both multi-frame and single-frame evaluations, with average improvements of 7.5% in AbsRel and 4.3% in RMSE on the nuScenes and RobotCar datasets (daytime, nighttime, rain). In zero-shot evaluation on DrivingStereo (rain, fog), our method also generalizes better than previous approaches. Our code will be released.