Random Registers for Cross-Domain Few-Shot Learning

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We find that prompt tuning, a common way to train ViTs, consistently decreases performance on target domains, while random noise leads to an increase; we delve into this phenomenon to interpret it, and propose a method based on it.
Abstract: Cross-domain few-shot learning (CDFSL) aims to transfer knowledge from a data-sufficient source domain to data-scarce target domains. Although the Vision Transformer (ViT) has shown superior capability in many vision tasks, its transferability under the large domain gaps of CDFSL remains under-explored. In this paper, we find an intriguing phenomenon: during source-domain training, prompt tuning, a common way to train ViTs, can harm the generalization of ViT to target domains, but setting the prompts to random noise (i.e., random registers) consistently improves target-domain performance. We then delve into this phenomenon to interpret it. We find that learnable prompts capture domain information during training on the source dataset, causing the model to treat irrelevant visual patterns as vital cues for recognition. This can be viewed as a form of overfitting that increases the sharpness of the loss landscape. In contrast, random registers are essentially a novel way of perturbing attention for sharpness-aware minimization, which helps the model find a flat minimum in the loss landscape, increasing transferability. Based on this phenomenon and interpretation, we further propose a simple but effective approach for CDFSL that enhances the perturbation of attention maps by adding random registers to the semantic regions of image tokens, improving both the effectiveness and efficiency of random registers. Extensive experiments on four benchmarks validate our rationale and show state-of-the-art performance. Codes and models are available at https://github.com/shuaiyi308/REAP.
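The core idea described in the abstract, replacing learnable prompt tokens with freshly sampled noise tokens that perturb the attention maps, can be illustrated with a minimal sketch. This is an assumption-based illustration, not the authors' implementation: the wrapper class, the `num_registers` parameter, and the use of Gaussian noise are hypothetical choices for exposition; see https://github.com/shuaiyi308/REAP for the actual REAP method, which additionally places registers on semantic regions of image tokens.

```python
# Minimal sketch of "random registers" for a ViT (hypothetical, PyTorch).
# Assumes a standard pre-norm ViT whose blocks operate on a token sequence
# of shape (batch, num_tokens, dim).
import torch
import torch.nn as nn


class RandomRegisterWrapper(nn.Module):
    """Prepends freshly sampled noise tokens ("random registers") to the
    token sequence in place of learnable prompt tokens."""

    def __init__(self, vit_blocks: nn.Module, dim: int, num_registers: int = 4):
        super().__init__()
        self.blocks = vit_blocks        # the stacked transformer blocks
        self.dim = dim
        self.num_registers = num_registers

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b = tokens.size(0)
        # Sample new noise on every forward pass: the registers are never
        # trained, so they act purely as a perturbation on attention,
        # rather than capturing source-domain information as learned
        # prompts would.
        registers = torch.randn(b, self.num_registers, self.dim,
                                device=tokens.device, dtype=tokens.dtype)
        x = torch.cat([registers, tokens], dim=1)
        x = self.blocks(x)
        # Discard register outputs; only [CLS] and patch tokens are used.
        return x[:, self.num_registers:, :]
```

Because the noise tokens receive attention but carry no learnable parameters, they perturb the attention distribution at every step, which is the sense in which the paper relates them to sharpness-aware minimization.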
Lay Summary: We find that prompt tuning, a common way to train ViTs, consistently decreases performance on target domains, while random noise leads to an increase; we delve into this phenomenon to interpret it, and propose a method based on it.
Link To Code: https://github.com/shuaiyi308/REAP
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: Cross-Domain Few-Shot Learning
Submission Number: 242