On sensitivity of meta-learning to support data

21 May 2021 (modified: 26 Oct 2021), NeurIPS 2021 Poster
Keywords: meta-learning, sensitivity, robustness
TL;DR: We demonstrate that meta-learning algorithms applied to few-shot learning are extremely sensitive to the support data.
Abstract: Meta-learning algorithms are widely used for few-shot learning, e.g., in image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e., the support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on standard few-shot image classification benchmarks. We explain our empirical findings in terms of class margins, which in turn suggests that robust and safe meta-learning requires larger margins than supervised learning.
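The sensitivity the abstract describes can be illustrated with a minimal sketch (not the paper's method): a prototypical-network-style classifier on synthetic 2-D "embeddings", where re-sampling the 1-shot support set alone produces a wide spread of query accuracies. All names, the two-Gaussian data, and the nearest-prototype rule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embeddings" for a 2-way task: two Gaussian classes.
def sample_class(mean, n):
    return rng.normal(mean, 1.0, size=(n, 2))

# Fixed query set of 50 examples per class.
query_x = np.vstack([sample_class([0.0, 0.0], 50),
                     sample_class([2.5, 2.5], 50)])
query_y = np.array([0] * 50 + [1] * 50)

def accuracy_with_support(support_per_class):
    # Prototype = mean of the support embeddings for each class;
    # classify each query point by its nearest prototype.
    protos = np.stack([s.mean(axis=0) for s in support_per_class])
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return (dists.argmin(axis=1) == query_y).mean()

# Re-draw 1-shot support sets many times; only the support data changes.
accs = [accuracy_with_support([sample_class([0.0, 0.0], 1),
                               sample_class([2.5, 2.5], 1)])
        for _ in range(200)]
print(f"accuracy spread over support sets: "
      f"min={min(accs):.2f} max={max(accs):.2f}")
```

Even in this toy setting the worst and best support draws give very different query accuracies, mirroring (in miniature) the 4%-vs-95% gap the paper reports for natural images.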
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: zip