Keywords: Transfer Learning
TL;DR: We study vision-language model adaptation (VLMA), a new unsupervised model adaptation framework that positions a pre-trained VLM as the source model and transfers it towards various unlabelled downstream datasets.
Abstract: The traditional model adaptation framework assumes the same vocabulary across pre-training and downstream datasets, which limits its transfer flexibility and efficiency when handling downstream datasets with different vocabularies.
Inspired by recent vision-language models (VLMs) that enable visual recognition defined by free-form texts via reasoning on both images and texts, we study vision-language model adaptation (VLMA), a new unsupervised model adaptation framework that positions a pre-trained VLM as the source model and transfers it towards various unlabelled downstream datasets.
To this end, we propose a Hough voting-based Self-Training (HoughST) technique that introduces a multimodal Hough voting mechanism, exploiting the synergy between vision and language to mitigate the distribution shifts in the image and text modalities simultaneously.
Specifically, HoughST exploits the complementarity of different types of features within and across the vision and language modalities, which enables joint use of visual and textual information and effective learning of image-text correspondences on the unlabelled downstream datasets.
Additionally, HoughST captures temporal information via temporal Hough voting, which helps it memorize and leverage downstream dataset information learnt in earlier adaptation steps.
Extensive experiments show that HoughST outperforms the state-of-the-art consistently across 11 image recognition tasks.
Code will be released.
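To make the voting idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' HoughST implementation, whose details are not given here): image embeddings from a frozen VLM receive votes from class-prompt text embeddings (language side) and from visual class prototypes (vision side), with an optional momentum term standing in for temporal vote accumulation; all function and variable names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def vote_pseudo_labels(img_feats, txt_feats, proto_feats, prev_votes=None, momentum=0.9):
    """Combine votes from image-text similarity, image-prototype similarity,
    and (optionally) votes accumulated over previous adaptation steps.
    This is an illustrative sketch, not the paper's actual algorithm."""
    img_feats = F.normalize(img_feats, dim=-1)        # [N, D] unlabelled image embeddings
    txt_feats = F.normalize(txt_feats, dim=-1)        # [C, D] class-prompt text embeddings
    proto_feats = F.normalize(proto_feats, dim=-1)    # [C, D] visual class prototypes

    text_votes = (img_feats @ txt_feats.t()).softmax(dim=-1)     # language-side votes
    proto_votes = (img_feats @ proto_feats.t()).softmax(dim=-1)  # vision-side votes
    votes = 0.5 * (text_votes + proto_votes)                     # multimodal vote pooling

    if prev_votes is not None:                                   # temporal accumulation
        votes = momentum * prev_votes + (1.0 - momentum) * votes

    conf, pseudo_labels = votes.max(dim=-1)
    return pseudo_labels, conf, votes

# Toy usage with random tensors standing in for a frozen VLM's outputs.
N, C, D = 16, 10, 512
labels, conf, votes = vote_pseudo_labels(torch.randn(N, D), torch.randn(C, D), torch.randn(C, D))
keep = conf > 0.5   # only confident pseudo-labels would drive a self-training loss
```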
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2909