Keywords: Test-time adaptation, Transfer Learning
TL;DR: We introduce the problem of multi-source test-time adaptation, where multiple trained source models are adapted to unlabeled test samples, and propose a simple yet effective method to solve it.
Abstract: Deep neural networks often generalize poorly when the distribution of test samples differs from that of the training samples. Recently, several fully test-time adaptation methods have been proposed to adapt a trained model with unlabeled test samples before prediction. Despite achieving remarkable results, these methods involve only a single trained model, which can provide only limited information about the test samples. In real-world scenarios, multiple trained models may be available, each beneficial to the test samples and complementary to the others. Consequently, to better utilize these trained models, in this paper we propose the problem of multi-source fully test-time adaptation, which adapts multiple trained models to the test samples. To this end, we introduce a simple yet effective method that combines a weighted aggregation scheme with two unsupervised losses: the former adaptively assigns higher weights to more relevant models, while the latter jointly adapts the models using online unlabeled samples. Extensive experiments on three image classification datasets show that the proposed method achieves better results than baseline methods.
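The abstract does not specify how the aggregation weights are computed. Below is a minimal, hypothetical sketch of one plausible instantiation of a weighted aggregation over multiple source models, using each model's average prediction entropy on a test batch as a relevance signal; the paper's actual scheme and losses may differ.

```python
# Hypothetical sketch of multi-source test-time aggregation (not the paper's exact method).
# Assumes `models` is a list of pretrained source models and `x` is a batch of unlabeled test inputs.
import torch
import torch.nn.functional as F

def aggregate_predictions(models, x):
    """Combine softmax outputs of multiple source models with adaptive weights.

    Each model's weight is derived from its average prediction entropy on the
    batch (lower entropy -> higher weight), one possible way to favor more
    relevant models.
    """
    probs = [F.softmax(m(x), dim=1) for m in models]  # per-model class probabilities
    # Mean entropy per model over the batch.
    entropies = torch.stack([
        (-p * p.clamp_min(1e-8).log()).sum(dim=1).mean() for p in probs
    ])
    weights = F.softmax(-entropies, dim=0)  # assign higher weight to more confident models
    combined = sum(w * p for w, p in zip(weights, probs))  # weighted ensemble prediction
    return combined, weights
```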
Submission Number: 12