Keywords: meta-learning, few-shot learning, auxiliary task
TL;DR: We propose MetaAux, a novel framework that exploits auxiliary tasks to learn a robust representation for better generalization and adaptation to unseen few-shot tasks.
Abstract: Modern meta-learning approaches achieve state-of-the-art performance by imitating the test condition of few-shot learning (FSL) through episodic training. However, overfitting and the memorization of corrupted labels have been long-standing issues. Data cleansing offers a promising solution for dealing with noisy labels. Nevertheless, in FSL, data cleansing exacerbates the problem: removing samples leaves the already limited training data even scarcer, and the model is typically under-trained. In this work, we address overfitting in the noisy setting by exploiting auxiliary tasks to learn a better shared representation. The unsupervised auxiliary tasks are designed to incur no extra labeling overhead, and the Wasserstein distance is leveraged to align the primary and auxiliary distributions, ensuring that the learned knowledge is domain-invariant. Building upon theoretical advances in PAC-Bayesian analysis, we derive novel generalization bounds for meta-learning with auxiliary tasks under the effect of noisy corruptions. Extensive experiments on FSL tasks with noisy labels demonstrate the effectiveness and robustness of the proposed method.
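The abstract names a Wasserstein alignment between the primary- and auxiliary-task feature distributions but gives no formula here. For concreteness, a minimal sketch of such an alignment loss follows: a generic log-domain Sinkhorn (entropic-regularized Wasserstein) computation in PyTorch. The function name and the hyperparameters `eps` and `n_iters` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a standard entropic-regularized Wasserstein
# (Sinkhorn) distance is used to align two embedding batches; this is a
# generic illustration of the technique named in the abstract.
import math
import torch

def sinkhorn_distance(x: torch.Tensor, y: torch.Tensor,
                      eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Entropic-regularized Wasserstein distance between batches x (n, d)
    and y (m, d) with uniform marginals, iterated in log space for stability."""
    cost = torch.cdist(x, y, p=2) ** 2                  # (n, m) pairwise squared-Euclidean costs
    n, m = cost.shape
    log_mu = torch.full((n,), -math.log(n), device=x.device)  # uniform marginal on x
    log_nu = torch.full((m,), -math.log(m), device=x.device)  # uniform marginal on y
    u = torch.zeros(n, device=x.device)                 # dual potentials
    v = torch.zeros(m, device=x.device)
    for _ in range(n_iters):
        u = eps * (log_mu - torch.logsumexp((v[None, :] - cost) / eps, dim=1))
        v = eps * (log_nu - torch.logsumexp((u[:, None] - cost) / eps, dim=0))
    plan = torch.exp((u[:, None] + v[None, :] - cost) / eps)  # transport plan
    return (plan * cost).sum()                          # approximate W distance

# Usage (hypothetical encoder outputs): the alignment term would be added
# to the primary few-shot loss so the shared representation stays
# domain-invariant across the two task distributions.
z_primary = torch.randn(32, 64)   # primary-task features
z_aux = torch.randn(32, 64)       # auxiliary-task features
align_loss = sinkhorn_distance(z_primary, z_aux)
```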
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning