Abstract: Few-shot classification (FSC), which aims to perform classification given only a few labeled samples, has attracted increasing attention in recent years. In transfer-learning approaches to FSC, learning a general feature representation is vital. To this end, our work focuses on mining more information from the supervised data jointly provided by a limited amount of annotated samples and a corresponding self-supervised learning (SSL) task. We prove that the cross-entropy (CE) and supervised contrastive (SC) losses are, respectively, well suited to learning compactness and separability representations (SRs). Building on this theoretical analysis, we further propose the joint learning of compactness and SRs (JLCSRs) for FSC. Specifically, for both the original supervised data and its augmented counterparts from the SSL task, our method first constructs a CE loss and an SC loss in the feature space. The backbone network is then trained jointly with a linear combination of these losses. Finally, the parameters of the backbone network are fixed for FSC evaluation. Extensive experiments on FSC benchmarks demonstrate that compactness and SR learning complement each other and that our method achieves results comparable to other state-of-the-art methods.
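To make the joint objective concrete, the following is a minimal PyTorch sketch of a linear combination of a CE term and an SC term over a shared feature space, in the spirit of the abstract. The supcon_loss implementation, the balancing weight lam, and the assumption that the backbone yields both classifier logits and embedding features are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    features: (N, d) embeddings; labels: (N,) integer class labels.
    Samples sharing a label are treated as positives for each other.
    """
    features = F.normalize(features, dim=1)              # work in cosine-similarity space
    sim = features @ features.t() / temperature          # (N, N) similarity matrix
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))      # exclude self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average the negative log-probability over each anchor's positives.
    pos_counts = pos_mask.sum(dim=1)
    per_anchor = (-log_prob).masked_fill(~pos_mask, 0.0).sum(dim=1)
    valid = pos_counts > 0                               # anchors with at least one positive
    return (per_anchor[valid] / pos_counts[valid]).mean()

def joint_loss(logits, features, labels, lam=1.0):
    """Linear combination of cross-entropy and supervised contrastive terms."""
    return F.cross_entropy(logits, labels) + lam * supcon_loss(features, labels)
```

In a typical training step, both the original batch and its SSL augmentations would be passed through the backbone, and joint_loss would be back-propagated to update the backbone parameters before they are frozen for evaluation.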