Simplifying and Stabilizing Model Selection in Unsupervised Domain Adaptation

17 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Unsupervised Domain Adaptation; Unsupervised Model Selection; Unsupervised Hyperparameter Selection
Abstract: Unsupervised domain adaptation (UDA) is a potent approach for enhancing model performance in an unlabeled target domain by leveraging relevant labeled data from a source domain. Despite the significant progress in UDA driven by deep learning, model selection, already challenging for deep models, becomes considerably more demanding in UDA due to the absence of labeled target data and substantial distribution shifts between domains. Existing model selection methods in UDA often fail to produce stable selections across diverse UDA methods and scenarios, frequently yielding suboptimal choices and sometimes even the worst candidate. This limitation significantly impairs their practicality and reliability for researchers and practitioners. To address this challenge, we introduce a novel ensemble-based validation approach, EnsV, that simplifies and stabilizes model selection in UDA. EnsV relies solely on predictions on unlabeled target data and makes no assumptions about the distribution shift, offering high simplicity and versatility. Moreover, EnsV is built upon an off-the-shelf ensemble that is theoretically guaranteed to outperform the worst candidate model, ensuring high stability. In our experiments, we compare EnsV to 8 competitive model selection approaches, evaluating 12 UDA methods across 5 diverse UDA benchmarks and 5 popular UDA scenarios. The results consistently demonstrate that EnsV is a simple, versatile, and stable approach for practical model selection in UDA.
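For context, the core idea admits a compact sketch. A uniform average of the candidates' predictions on target data is, by Jensen's inequality, no worse under any convex loss than the average candidate, and hence never worse than the worst one; each candidate can then be scored by how well it matches this ensemble. The snippet below is a minimal illustration of that idea in Python, assuming uniform prediction averaging and agreement with the ensemble's pseudo-labels as the validation score (the hypothetical ensv_select helper and this scoring rule are assumptions for illustration; the paper's exact formulation may differ).

    import numpy as np

    def ensv_select(probs: np.ndarray) -> int:
        """Sketch of ensemble-based validation for model selection.

        probs: (K, N, C) array of softmax outputs from K candidate
               models on N unlabeled target samples over C classes.
        Returns the index of the selected candidate model.
        """
        ensemble = probs.mean(axis=0)      # (N, C): averaged target predictions
        pseudo = ensemble.argmax(axis=1)   # (N,): ensemble pseudo-labels
        # Score each candidate by the fraction of target samples on which
        # its predicted label agrees with the ensemble pseudo-label.
        agreement = (probs.argmax(axis=2) == pseudo).mean(axis=1)  # (K,)
        return int(agreement.argmax())

    # Example with synthetic predictions: 5 candidate models,
    # 1000 target samples, 10 classes.
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(10), size=(5, 1000))  # shape (5, 1000, 10)
    print(ensv_select(probs))

Note that the procedure needs only the candidates' predictions on unlabeled target data, which is what makes it agnostic to the UDA method and the type of distribution shift.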
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 779