Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment

Published: 17 Apr 2019, Last Modified: 05 May 2023 · LLD 2019
Keywords: domain adaptation, distribution alignment, adversarial training, theory
TL;DR: Instead of the strict distribution alignment in traditional deep domain adaptation objectives, which fails when the target label distribution shifts, we propose to optimize a relaxed alignment objective, supported by new analysis, new algorithms, and experimental validation.
Abstract: Domain adaptation addresses the common problem in which the target distribution generating our test data drifts from the source (training) distribution. While domain adaptation is impossible absent assumptions, strict conditions, e.g., covariate or label shift, enable principled algorithms. Recently-proposed domain-adversarial approaches align source and target encodings, often motivated as minimizing two (of three) terms in a theoretical bound on target error. Unfortunately, this minimization can cause arbitrary increases in the third term, e.g., these approaches can break down under shifting label distributions. We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms. Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.
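The relaxation replaces exact matching of the source and target encoding distributions with a one-sided requirement: the target encoding density may be at most (1 + beta) times the source encoding density pointwise, so the target may concentrate on part of the source support without penalty. The sketch below is a toy discrete illustration of that idea, not the paper's adversarial implementation; the function name relaxed_alignment_penalty, the discretized densities, and the specific penalty form are assumptions for exposition.

```python
import numpy as np

def relaxed_alignment_penalty(p_source, p_target, beta=1.0):
    """Toy one-sided relaxed mismatch between discretized encoding densities.

    Illustrative sketch only (not the paper's adversarial objective):
    the penalty is zero whenever p_target(z) <= (1 + beta) * p_source(z)
    for every bin z; beta = 0 recovers a strict one-sided alignment check.
    """
    p_source = np.asarray(p_source, dtype=float)
    p_target = np.asarray(p_target, dtype=float)
    # Penalize only the mass by which the target exceeds the relaxed
    # source envelope (1 + beta) * p_source.
    excess = np.maximum(0.0, p_target - (1.0 + beta) * p_source)
    return float(excess.sum())

# Example: after a target label shift, the target encoding concentrates
# on half of the source support.
p_s = np.array([0.25, 0.25, 0.25, 0.25])
p_t = np.array([0.50, 0.50, 0.00, 0.00])

print(relaxed_alignment_penalty(p_s, p_t, beta=0.0))  # 0.5: strict alignment penalized
print(relaxed_alignment_penalty(p_s, p_t, beta=1.0))  # 0.0: relaxed alignment accepted
```

Under strict alignment (beta = 0), the encoder is pushed to spread target mass over the entire source support, which can force errors when target label proportions have shifted; the relaxation leaves such concentrated-but-covered target encodings unpenalized.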