MetaCL: a semi-supervised meta learning architecture via contrastive learning

Published: 01 Jan 2024, Last Modified: 01 Nov 2024 · Int. J. Mach. Learn. Cybern. 2024 · CC BY-SA 4.0
Abstract: Meta learning aims to endow models with the ability to quickly learn new tasks based on existing knowledge. However, recent works have relied on complex structures and prior information to improve performance on few-shot tasks. To this end, we propose MetaCL, a meta learning architecture that uses only a traditional backbone without any priors. MetaCL takes distorted versions of an episode of samples as input and outputs a prediction for each view. In addition, we introduce an unsupervised loss that minimizes component redundancy and maximizes variability, imposing soft-whitening and soft-alignment constraints. We evaluate MetaCL on few-shot tasks of the image classification datasets CUB and miniImageNet, and experimental results show that MetaCL outperforms other meta-learning methods. MetaCL can serve as a simple yet effective baseline and can also be easily integrated into other few-shot models for additional performance gains.
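The abstract does not give the exact form of the unsupervised loss. As a rough illustration of the soft-whitening and soft-alignment idea it describes, the sketch below shows a redundancy-reduction-style objective over the embeddings of two distorted views; the function name, the `lambd` weight, and the precise formulation are assumptions for illustration, not the paper's actual loss.

```python
import torch


def metacl_unsupervised_loss(z1: torch.Tensor, z2: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """Hypothetical redundancy-reduction loss on two distorted views of an episode.

    z1, z2: embeddings of the same episode under two distortions, shape (N, D).
    """
    n, _ = z1.shape
    # Standardize each embedding dimension (zero mean, unit variance per component).
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-6)
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-6)

    # Cross-correlation matrix between the two views, shape (D, D).
    c = (z1.T @ z2) / n

    # Soft-alignment: drive diagonal entries toward 1 so the two views agree per component.
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()
    # Soft-whitening: drive off-diagonal entries toward 0 to reduce component redundancy.
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()

    return on_diag + lambd * off_diag
```

In this sketch the diagonal term plays the role of the alignment constraint between distorted views, while the off-diagonal term decorrelates embedding components, which is one common way to realize a soft-whitening constraint.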