TL;DR: We find that disrupting the continuity of image patches (e.g., by shuffling patches) affects the source and target domains differently. We delve into this phenomenon for an interpretation and propose a CDFSL method based on it.
Abstract: Vision Transformer (ViT) has achieved remarkable success thanks to its large-scale pretraining on general domains, but it still faces challenges when applied to distant downstream domains with only scarce training data, which gives rise to the Cross-Domain Few-Shot Learning (CDFSL) task. Inspired by Self-Attention's insensitivity to token order, we find an interesting phenomenon neglected in current works: disrupting the continuity of image tokens (i.e., making pixels no longer transition smoothly across patches) in ViT leads to a noticeable performance decline in the general (source) domain but only a marginal decrease in downstream target domains. This questions the role of image tokens' continuity in ViT's generalization under large domain gaps. In this paper, we delve into this phenomenon for an interpretation. We find that continuity aids ViT in learning larger spatial patterns, which are harder to transfer than smaller ones and thus enlarge domain distances. Meanwhile, this implies that only smaller patterns within each patch can be transferred under extreme domain gaps. Based on this interpretation, we further propose a simple yet effective method for CDFSL that better disrupts the continuity of image tokens, encouraging the model to rely less on large patterns and more on small ones. Extensive experiments show the effectiveness of our method in reducing domain gaps and outperforming state-of-the-art works. Codes and models are available at https://github.com/shuaiyi308/ReCIT.
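To make the abstract's notion of "disrupting the continuity of image tokens" concrete, below is a minimal sketch (not the authors' released code; see the repository above for the actual method) of one way to break patch continuity by randomly shuffling non-overlapping patches before they are fed to a ViT. The function name, patch size, and tensor layout are assumptions for illustration.

```python
# Minimal sketch, assuming a (B, C, H, W) image batch and a ViT patch size of 16.
# Shuffling patches breaks smooth pixel transitions across patch borders while
# leaving the content inside each patch intact.
import torch


def shuffle_patches(images: torch.Tensor, patch_size: int = 16) -> torch.Tensor:
    """Randomly permute non-overlapping patches of a batch of images."""
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size  # patch grid dimensions
    # Split the images into a grid of patches: (B, gh*gw, C, p, p).
    patches = (images
               .reshape(b, c, gh, patch_size, gw, patch_size)
               .permute(0, 2, 4, 1, 3, 5)
               .reshape(b, gh * gw, c, patch_size, patch_size))
    # Apply one random permutation of patch positions to the whole batch.
    perm = torch.randperm(gh * gw, device=images.device)
    patches = patches[:, perm]
    # Reassemble the shuffled patches back into image layout.
    return (patches
            .reshape(b, gh, gw, c, patch_size, patch_size)
            .permute(0, 3, 1, 4, 2, 5)
            .reshape(b, c, h, w))


# Example usage: shuffled images can be fed to a ViT to probe how much its
# performance depends on patch continuity in source vs. target domains.
x = torch.randn(8, 3, 224, 224)
x_shuffled = shuffle_patches(x)
```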
Lay Summary: We find that disrupting the continuity of image patches (e.g., by shuffling patches) affects the source and target domains differently. We delve into this phenomenon for an interpretation and propose a CDFSL method based on it.
Link To Code: https://github.com/shuaiyi308/ReCIT
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: Cross-Domain Few-Shot Learning
Submission Number: 247