PRO: Pseudo-label Regularized Optimization on Unlabeled Test Data

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: zero-shot classification, unsupervised learning, test-time training, CLIP
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Web-scale foundation models like CLIP have impressive zero-shot capabilities on many downstream classification tasks, yet they still underperform supervised classifiers trained on the target domain. This has inspired researchers to investigate adaptation strategies that take advantage of unlabeled data, often via pseudo-labeling. However, previous adaptation methods can be difficult to train: poor hyperparameter choices can cause catastrophic collapses in accuracy, and absent target labels there is little to guide the search. In this paper, we propose Pseudo-label Regularized Optimization (PRO), which prevents these collapses in test-time adaptation without any label peeking for hyperparameter tuning. On the 18 datasets studied in our experiments, PRO improves the accuracy of ViT-B-32 by 2.5\% on average and by up to 6.1\% in the best case by tuning the textual encoder. Our code is available at \url{https://github.com/anonWAEWA/PRO}.
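For readers unfamiliar with the setting the abstract describes, the sketch below illustrates generic pseudo-label self-training of a CLIP-style zero-shot classifier on unlabeled test data, with only the text encoder left trainable. It is a minimal illustration under assumed choices (open_clip ViT-B-32, a single prompt template, a confidence threshold, placeholder class names, and the learning rate), not the PRO method or its regularization.

```python
# Minimal sketch: pseudo-label self-training of a CLIP-style zero-shot classifier
# on unlabeled test images. Class names, threshold, and learning rate are
# illustrative assumptions; this is NOT the PRO algorithm itself.
import torch
import torch.nn.functional as F
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model = model.to(device)

class_names = ["dog", "cat", "car"]  # placeholder target-domain classes
prompts = tokenizer([f"a photo of a {c}" for c in class_names]).to(device)

# Adapt only the textual encoder; keep the image tower frozen.
for p in model.visual.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-6)

def classify(images):
    """Zero-shot logits from normalized image and text embeddings."""
    img = F.normalize(model.encode_image(images), dim=-1)
    txt = F.normalize(model.encode_text(prompts), dim=-1)
    return model.logit_scale.exp() * img @ txt.t()

def adapt_step(images, confidence_threshold=0.9):
    """One self-training step: pseudo-label confident test images, then
    minimize cross-entropy against those pseudo-labels."""
    with torch.no_grad():
        probs = classify(images).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf >= confidence_threshold  # trust only confident predictions
    if keep.any():
        loss = F.cross_entropy(classify(images[keep]), pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

As the abstract notes, naive variants of this loop are brittle; without target labels, a poor threshold or learning rate can collapse accuracy, which is the failure mode PRO is designed to prevent.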
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7989