Implicit Task-Driven Probability Discrepancy Measure for Unsupervised Domain Adaptation

Published: 09 Nov 2021, Last Modified: 05 May 2023 · NeurIPS 2021 Poster · Readers: Everyone
Keywords: probability discrepancy measure, unsupervised domain adaptation
TL;DR: Warping the probability discrepancy measure towards the end tasks can significantly improve unsupervised domain adaptation.
Abstract: The probability discrepancy measure is a fundamental construct for numerous machine learning models, such as weakly supervised learning and generative modeling. However, most measures overlook the fact that the distributions are not the end product of learning, but rather the basis of a downstream predictor. It is therefore important to warp the probability discrepancy measure towards the end task, and we hence propose a new bi-level optimization based approach in which the two distributions are compared not uniformly against the entire hypothesis space, but only with respect to the optimal predictor for the downstream end task. When applied to the margin disparity discrepancy and the contrastive domain discrepancy, our method significantly improves performance in unsupervised domain adaptation and enjoys a much more principled training process.
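The abstract only sketches the construction, so the following is a minimal schematic of the contrast it describes; all notation here (the loss ℓ, feature extractor ψ, predictor h, source risk R_P, trade-off weight λ) is introduced purely for illustration and is not taken from the paper. A conventional discrepancy compares the two distributions uniformly over the whole hypothesis class:

\[
d_{\mathcal{H}}(P, Q) \;=\; \sup_{h \in \mathcal{H}} \bigl| \mathbb{E}_{x \sim P}\,\ell(h(x)) \;-\; \mathbb{E}_{x \sim Q}\,\ell(h(x)) \bigr|,
\]

whereas a task-driven, bi-level variant in the spirit of the abstract evaluates the discrepancy only at the predictor that is optimal for the downstream task on the learned features:

\[
\min_{\psi} \; \mathcal{R}_{P}\bigl(h^{*}_{\psi} \circ \psi\bigr) \;+\; \lambda\, d_{h^{*}_{\psi}}\bigl(P_{\psi}, Q_{\psi}\bigr)
\quad \text{s.t.} \quad
h^{*}_{\psi} \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \mathcal{R}_{P}(h \circ \psi),
\]

where R_P is the supervised risk on the labeled source domain and P_ψ, Q_ψ are the pushforward distributions of the features. This sketch only captures the uniform-versus-optimal-predictor distinction; the concrete objectives used with the margin disparity discrepancy and the contrastive domain discrepancy are given in the paper and supplementary material.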
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf
Code: https://www.dropbox.com/sh/8e2enu3mwl7oxwk/AAAT8_xqkyLzLMqxqFH6tTjWa?dl=0