Rep-Adapter: Parameter-free Automatic Adaptation of Pre-trained ConvNets via Re-parameterization

16 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Parameter-free Automatic Adaptation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Recent advances in visual pre-training have demonstrated the advantage of transferring pre-trained models to target tasks. However, different transfer learning protocols have distinctive advantages on different target tasks, and choosing among them is nontrivial without repeated trial and error. This paper presents a parameter-free automatic model adaptation protocol for ConvNets that automatically balances between fine-tuning and linear probing by using an adaptive learning rate for each convolution filter on the target task. First, we propose Rep-Adapter, an adapter module with a re-parameterization scheme that achieves a soft balance between pre-trained and fine-tuned filters and can be equivalently converted to a single weight layer, introducing no additional parameters at inference time. We show through theoretical analysis that Rep-Adapter can simulate a ConvNet layer in which each filter is fine-tuned with a different learning rate. We then present a simple adapter tuning protocol with Rep-Adapter that achieves automatic adaptation of pre-trained models without additional search cost. Extensive experiments on various datasets with ResNet and CLIP demonstrate the superiority of Rep-Adapter in semi-supervised, few-shot, and full-dataset transfer learning scenarios.
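
To make the idea in the abstract concrete, below is a minimal illustrative sketch (not the authors' implementation) of how such a re-parameterizable adapter could be built in PyTorch: a frozen pre-trained filter branch and a trainable filter branch are mixed per output filter, and because both branches are linear convolutions they can be folded into a single Conv2d for inference. The class name `RepAdapterConv2d`, the sigmoid-gated mixing coefficient `alpha`, and the `reparameterize` helper are assumptions for illustration only, and groups/dilation are ignored for simplicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RepAdapterConv2d(nn.Module):
    """Illustrative adapter: soft per-filter mix of frozen and tuned conv weights."""

    def __init__(self, pretrained_conv: nn.Conv2d):
        super().__init__()
        # Frozen pre-trained filters (linear-probing-like branch).
        self.frozen_weight = nn.Parameter(
            pretrained_conv.weight.detach().clone(), requires_grad=False)
        # Trainable copy of the filters (fine-tuning-like branch).
        self.tuned_weight = nn.Parameter(pretrained_conv.weight.detach().clone())
        # Per-output-filter mixing coefficient: softly balances the two branches.
        self.alpha = nn.Parameter(torch.zeros(pretrained_conv.out_channels))
        self.bias = (nn.Parameter(pretrained_conv.bias.detach().clone())
                     if pretrained_conv.bias is not None else None)
        self.stride = pretrained_conv.stride
        self.padding = pretrained_conv.padding

    def merged_weight(self) -> torch.Tensor:
        # Convex combination per output filter; since both branches apply the
        # same convolution to the same input, the sum collapses into one weight.
        a = torch.sigmoid(self.alpha).view(-1, 1, 1, 1)
        return a * self.tuned_weight + (1.0 - a) * self.frozen_weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.conv2d(x, self.merged_weight(), self.bias,
                        stride=self.stride, padding=self.padding)

    @torch.no_grad()
    def reparameterize(self) -> nn.Conv2d:
        # Fold both branches into a single plain Conv2d, so no extra
        # parameters remain at inference time.
        out_ch, in_ch, kh, kw = self.frozen_weight.shape
        conv = nn.Conv2d(in_ch, out_ch, kernel_size=(kh, kw),
                         stride=self.stride, padding=self.padding,
                         bias=self.bias is not None)
        conv.weight.copy_(self.merged_weight())
        if self.bias is not None:
            conv.bias.copy_(self.bias)
        return conv
```

In this sketch, training updates only `tuned_weight` and `alpha` while `frozen_weight` stays fixed, so a small `alpha` keeps a filter close to linear probing and a large one lets it behave like full fine-tuning; after training, calling `reparameterize()` yields an ordinary convolution of the original size.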
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 625