Distributionally Robust Learning for Uncertainty Calibration under Domain Shift

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submitted
Keywords: Domain shift, uncertainty estimation, calibration, distributional robustness, unsupervised domain adaptation, semi-supervised learning
Abstract: We propose a framework for learning calibrated uncertainties under domain shift. We consider the setting where the source (training) distribution differs significantly from the target (test) distribution. We detect such domain shifts with a binary domain classifier, which we integrate with the task network and train jointly end-to-end. The binary domain classifier yields a density ratio that reflects how close a target (test) sample is to the source (training) distribution, and we use this ratio to adjust the prediction uncertainty of the task network. This use of the density ratio is grounded in the distributionally robust learning (DRL) framework, which accounts for domain shift through adversarial risk minimization. We demonstrate that our method produces calibrated uncertainties that benefit downstream tasks such as unsupervised domain adaptation (UDA) and semi-supervised learning (SSL), where methods like self-training and FixMatch use uncertainties to select confident pseudo-labels for re-training. Our experiments show that introducing DRL leads to significant improvements in cross-domain performance. We also show that the estimated density ratios agree with human selection frequencies, suggesting that they match a proxy for human-perceived uncertainty.
One-sentence Summary: We propose a distributionally robust method for uncertainty estimation under domain shift.
Supplementary Material: zip
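To make the core mechanism concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code). It relies on a standard fact: a binary domain classifier d(x) trained with balanced source/target batches satisfies, at its Bayes optimum, p_source(x) / p_target(x) = d(x) / (1 - d(x)), so the density ratio follows directly from the classifier output. The confidence-tempering rule and all names (DomainClassifier, density_ratio, tempered_probs, the 0.95 threshold) are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainClassifier(nn.Module):
    # Binary classifier producing d(x) = P(x is a source sample | features).
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, feats):
        return torch.sigmoid(self.net(feats)).squeeze(-1)

def density_ratio(d, eps=1e-6):
    # With balanced source/target batches, the Bayes-optimal classifier
    # gives p_source(x) / p_target(x) = d(x) / (1 - d(x)).
    d = d.clamp(eps, 1.0 - eps)
    return d / (1.0 - d)

def tempered_probs(logits, ratio):
    # Illustrative adjustment (an assumption, not the paper's exact rule):
    # map the ratio to a weight in (0, 1) that scales the logits, pushing
    # predictions toward uniform for samples far from the source support.
    w = (ratio / (1.0 + ratio)).unsqueeze(-1)
    return F.softmax(logits * w, dim=-1)

# Usage: keep only confident pseudo-labels on target data, as in the
# self-training / FixMatch-style re-training described in the abstract.
feats  = torch.randn(4, 256)   # stand-in target features
logits = torch.randn(4, 10)    # stand-in task-network logits
dom    = DomainClassifier(256)
probs  = tempered_probs(logits, density_ratio(dom(feats)))
conf, pseudo = probs.max(dim=-1)
mask = conf > 0.95             # hypothetical confidence threshold

Under this sketch, target samples the domain classifier deems far from the source distribution receive flattened predictive distributions, so fewer of their pseudo-labels clear the confidence threshold.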