Supervised Domain Adaptation Based on Marginal and Conditional Distributions Alignment

TMLR Paper2372 Authors

12 Mar 2024 (modified: 25 Apr 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Supervised domain adaptation (SDA) is an area of machine learning in which the goal is to achieve good generalization performance on data from a target domain, given a small corpus of labeled training data from the target domain and a large corpus of labeled data from a related source domain. In this work, building on a generalization of a well-known theoretical result of Ben-David et al. (2010), we propose an SDA approach in which adaptation is performed by aligning the marginal and conditional components of the input-label joint distributions. Beyond its theoretical grounding, we demonstrate that the proposed approach has two advantages over existing SDA approaches. First, it applies to a broad range of learning tasks, including regression, classification, multi-label classification, and few-shot learning. Second, it takes into account the geometric structure of the input and label spaces. Experimentally, despite its generality, our approach achieves results on par with or superior to recent state-of-the-art task-specific methods.
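To make the abstract's alignment idea concrete, below is a minimal, hypothetical sketch of a joint-distribution alignment objective: a marginal discrepancy term between source and target features plus a class-conditional term averaged over classes present in both domains. The RBF-kernel MMD, the function names (`rbf_mmd`, `sda_alignment_loss`), and the weights `lam_m`, `lam_c` are illustrative assumptions, not the paper's actual objective.

```python
# Illustrative sketch only: marginal + conditional alignment for SDA.
# All names and the MMD/RBF choices are assumptions, not the paper's method.
import torch


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared maximum mean discrepancy between samples x and y
    under an RBF kernel with bandwidth sigma."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


def sda_alignment_loss(feat_s, y_s, feat_t, y_t, num_classes,
                       lam_m=1.0, lam_c=1.0):
    """Marginal MMD over all features, plus class-conditional MMD
    averaged over classes with enough samples in both domains."""
    marginal = rbf_mmd(feat_s, feat_t)
    conditional, n = feat_s.new_zeros(()), 0
    for c in range(num_classes):
        s_c, t_c = feat_s[y_s == c], feat_t[y_t == c]
        if len(s_c) > 1 and len(t_c) > 1:
            conditional = conditional + rbf_mmd(s_c, t_c)
            n += 1
    if n > 0:
        conditional = conditional / n
    return lam_m * marginal + lam_c * conditional


if __name__ == "__main__":
    # Tiny smoke test: a large labeled source batch, a small labeled target batch.
    torch.manual_seed(0)
    fs, ys = torch.randn(64, 16), torch.randint(0, 3, (64,))
    ft, yt = torch.randn(8, 16) + 0.5, torch.randint(0, 3, (8,))
    print(sda_alignment_loss(fs, ys, ft, yt, num_classes=3))
```

In training, such an alignment term would typically be added to the supervised loss (e.g., cross-entropy or a regression loss) computed on both labeled corpora, with `lam_m` and `lam_c` tuned on held-out target data.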
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The changes, highlighted in red, are listed in the response to each reviewer.
Assigned Action Editor: ~Rémi_Flamary1
Submission Number: 2372