Exploring Efficient Few-shot Adaptation for Vision Transformers

Published: 12 Sept 2022, Last Modified: 17 Sept 2024. Accepted by TMLR.
Abstract: Few-shot Learning (FSL) aims to perform inference on novel categories that contain only a few labeled examples, with the help of knowledge learned from base categories that contain abundant labeled training samples. While there are numerous works on the FSL task, Vision Transformers (ViTs) have rarely been taken as the backbone for FSL, with the few existing attempts focusing on naive finetuning of either the whole backbone or the classification layer. Essentially, although ViTs have been shown to enjoy comparable or even better performance on other vision tasks, it remains nontrivial to efficiently finetune them in real-world FSL scenarios. To this end, we propose a novel efficient Transformer Tuning (eTT) method that facilitates finetuning ViTs on FSL tasks. The key novelties are the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA), for task and backbone finetuning respectively. Specifically, in APT, the prefix is projected to new key and value pairs that are attached to each self-attention layer to provide the model with task-specific information. Moreover, we design the DRA in the form of learnable offset vectors to handle potential domain gaps between base and novel data. To ensure that APT does not deviate much from the initial task-specific information, we further propose a novel prototypical regularization, which minimizes the discrepancy between the projected prefix distribution and the initial prototypes, thereby regularizing the update procedure. Our method achieves outstanding performance on the challenging Meta-Dataset, and we conduct extensive experiments to demonstrate its efficacy. Our model and code will be released.
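As a rough illustration of the two modules and the regularizer described in the abstract, the following PyTorch sketch shows one plausible realization. All module names, tensor shapes, initializations, and the cosine-similarity form of the regularization loss are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentivePrefixTuning(nn.Module):
    """APT sketch: learnable prefix tokens are projected to extra key/value
    pairs that are attached to a self-attention layer's keys and values."""

    def __init__(self, num_prefix: int, dim: int):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(num_prefix, dim) * 0.02)
        self.to_kv = nn.Linear(dim, 2 * dim)  # projects each prefix token to a (key, value) pair

    def forward(self, k: torch.Tensor, v: torch.Tensor):
        # k, v: (batch, seq_len, dim) keys/values of one attention layer
        pk, pv = self.to_kv(self.prefix).chunk(2, dim=-1)
        pk = pk.unsqueeze(0).expand(k.size(0), -1, -1)
        pv = pv.unsqueeze(0).expand(v.size(0), -1, -1)
        # prepend the task-specific pairs so every query can attend to them
        return torch.cat([pk, k], dim=1), torch.cat([pv, v], dim=1)


class DomainResidualAdapter(nn.Module):
    """DRA sketch: a learnable offset vector added to token features as a
    residual, intended to absorb the base-to-novel domain shift."""

    def __init__(self, dim: int):
        super().__init__()
        self.offset = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.offset  # x: (batch, seq_len, dim)


def prototypical_regularization(projected_prefix: torch.Tensor,
                                prototypes: torch.Tensor) -> torch.Tensor:
    """Hypothetical loss: keep the projected prefix close to the initial
    class prototypes (both assumed to have shape (num_classes, dim))."""
    return (1.0 - F.cosine_similarity(projected_prefix, prototypes, dim=-1)).mean()
```

In such a design, only the prefix, its key/value projection, and the offset vectors would be updated during few-shot adaptation, keeping the pretrained ViT backbone frozen, which is consistent with the parameter-efficient finetuning goal stated above.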
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
Revision for rebuttal:
1. We have clarified several points, such as the selection of hyper-parameters, the training procedure of our method, and the names of our modules.
2. We have corrected the typos and wrong notations in the original version.
3. We have updated the teaser figure for better understanding.
4. We have added references to, and discussion of, more related works.
5. As suggested by the reviewers, we have conducted additional ablation studies to further support the efficacy of the proposed eTT, including: a) a comparison between models with and without standardization in the prototypical regularization, b) a FiLM-like DRA structure, c) a comparison with baseline methods using the DINO pretraining strategy, and d) a comparison with models trained on the full ImageNet. These results are added to the main paper and the supplementary material.
Minor revision:
1. We have clarified several points according to the editor's suggestions.
2. We have fixed problems with citations.
3. We have added new experiments according to the reviewers' suggestions.
Assigned Action Editor: ~Marcus_Rohrbach1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 194