Simplifying and Stabilizing Model Selection in Unsupervised Domain Adaptation

Published: 28 Oct 2023, Last Modified: 02 Apr 2024 · DistShift 2023 Poster
Keywords: Unsupervised Domain Adaptation; Unsupervised Model Selection; Unsupervised Hyperparameter Selection
Abstract: Reliable model selection is crucial for unleashing the full potential of advanced unsupervised domain adaptation (UDA) methods to improve model performance in unlabeled target domains. However, existing model selection methods in UDA often fail to select reliably across diverse UDA methods and scenarios, and can make highly risky worst-case selections. This limitation significantly hinders their practicality and reliability for researchers and practitioners in the community. In this paper, we introduce EnsV, a novel ensemble-based approach that takes a pivotal step toward reliable model selection by avoiding selection of the worst model. EnsV is built on an off-the-shelf ensemble that is theoretically guaranteed to outperform the worst candidate model, ensuring high reliability. Notably, EnsV relies solely on predictions of unlabeled target data without making any assumptions about domain distribution shifts, offering high simplicity and versatility for a wide range of practical UDA problems. In our experiments, we compare EnsV to 8 competitive model selection approaches. Our evaluation involves 12 UDA methods across 5 diverse UDA benchmarks and 5 popular UDA scenarios. The results consistently demonstrate that EnsV is a highly simple, versatile, and reliable approach for practical model selection in UDA. Code is available at \url{https://github.com/LHXXHB/EnsV}.
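The abstract describes selecting a model from its agreement with an off-the-shelf ensemble of the candidates' predictions on unlabeled target data. The sketch below is a minimal illustration of that idea, not the authors' implementation (see the linked repository for that): it averages the candidates' softmax outputs, treats the ensemble's argmax as pseudo-labels, and picks the candidate that agrees with them most often. The function name and the agreement criterion are assumptions made for illustration.

```python
import numpy as np

def select_by_ensemble_agreement(candidate_probs):
    """Pick the candidate agreeing most with the ensemble prediction.

    candidate_probs: array of shape (num_models, num_samples, num_classes),
    each candidate's softmax outputs on unlabeled target data.
    Returns the index of the selected candidate model.

    NOTE: illustrative sketch only; EnsV's actual selection rule is
    defined in the paper and its code repository.
    """
    probs = np.asarray(candidate_probs, dtype=float)
    ensemble = probs.mean(axis=0)            # average prediction per sample
    pseudo_labels = ensemble.argmax(axis=1)  # ensemble's hard labels
    # Fraction of samples on which each candidate's argmax matches
    # the ensemble's pseudo-labels.
    agreement = (probs.argmax(axis=2) == pseudo_labels).mean(axis=1)
    return int(agreement.argmax())

# Toy example: 3 candidate models, 4 unlabeled samples, 2 classes.
cands = np.array([
    [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.7, 0.3]],
    [[0.6, 0.4], [0.4, 0.6], [0.3, 0.7], [0.6, 0.4]],  # matches ensemble on all 4
    [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.3, 0.7]],
])
print(select_by_ensemble_agreement(cands))  # → 1
```

The only inputs are target-domain predictions, which mirrors the abstract's claim that no assumptions about the distribution shift are needed.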
Submission Number: 81