MAML is a Noisy Contrastive Learner

Anonymous

30 Sept 2021 (modified: 12 Mar 2024) · NeurIPS 2021 Workshop MetaLearn Blind Submission · Readers: Everyone
Keywords: Meta-learning, contrastive learning, few-shot learning
TL;DR: The model-agnostic meta-learning (MAML) algorithm is essentially a noisy contrastive learner, where the noise comes from random initialization and cross-task interference.
Abstract: Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success in a variety of learning problems. However, because of its nested design, in which inner-loop and outer-loop updates govern task-specific and meta-model-centric learning respectively, the underlying learning objective of MAML remains implicit, impeding a more direct understanding of the algorithm. In this paper, we provide a new perspective on the working mechanism of MAML. We show that MAML is analogous to a meta-learner using a supervised contrastive objective, in which query features are pulled towards support features of the same class and pushed away from those of different classes; we verify this contrastiveness experimentally through a cosine-similarity analysis. Moreover, we reveal that vanilla MAML suffers from an undesirable interference term originating from random initialization and cross-task interaction. We therefore propose a simple but effective technique, the zeroing trick, to alleviate this interference, and we conduct extensive experiments on the miniImagenet and Omniglot datasets that demonstrate the consistent improvement brought by the proposed technique, validating its effectiveness.
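As a rough illustration of how the zeroing trick can be slotted into MAML's nested inner-/outer-loop updates, below is a minimal first-order MAML sketch in PyTorch. This is not the authors' code: the stand-in model class `ConvProto`, the attribute name `head` for the final linear classifier, the helper names `zero_head` and `maml_outer_step`, and the exact placement of the zeroing (once per outer-loop step, before adaptation) are all assumptions made for illustration.

```python
# Minimal, illustrative sketch only -- not the paper's implementation.
# Assumption: the meta-model ends in a linear classifier stored as `self.head`,
# and the zeroing trick resets that head before each outer-loop iteration so
# inner-loop adaptation is not biased by randomly initialized or cross-task
# head weights.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvProto(nn.Module):
    """Tiny stand-in meta-model: an encoder followed by a linear classifier head."""
    def __init__(self, n_way: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU())
        self.head = nn.Linear(64, n_way)

    def forward(self, x):
        return self.head(self.encoder(x))

def zero_head(model: nn.Module) -> None:
    """Zeroing trick (as we read the abstract): zero the final linear layer."""
    nn.init.zeros_(model.head.weight)
    if model.head.bias is not None:
        nn.init.zeros_(model.head.bias)

def maml_outer_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=1):
    """One first-order MAML outer step over a batch of few-shot tasks.
    Each task is a tuple (x_support, y_support, x_query, y_query)."""
    zero_head(model)                      # apply the zeroing trick before adaptation
    meta_opt.zero_grad()
    for x_s, y_s, x_q, y_q in tasks:
        learner = copy.deepcopy(model)    # task-specific copy of the meta-model
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):      # inner loop: adapt on the support set
            inner_opt.zero_grad()
            F.cross_entropy(learner(x_s), y_s).backward()
            inner_opt.step()
        query_loss = F.cross_entropy(learner(x_q), y_q)    # outer-loop objective
        grads = torch.autograd.grad(query_loss, learner.parameters())
        for p, g in zip(model.parameters(), grads):         # first-order approx.:
            p.grad = g if p.grad is None else p.grad + g    # accumulate task grads
    meta_opt.step()                       # outer loop: update the meta-model
```

A hypothetical usage would call `maml_outer_step(model, torch.optim.Adam(model.parameters(), lr=1e-3), task_batch)` once per meta-iteration. Under the abstract's reading, zeroing the head means the first inner-loop step builds the classifier purely from support features, so the query-versus-support (contrastive) interaction is not contaminated by the random-initialization and cross-task terms the paper identifies as noise.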
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2106.15367/code)
