Attacking Few-Shot Classifiers with Adversarial Support Poisoning

18 Jun 2021, 13:23 (modified: 01 Jul 2021, 11:50)
ICML 2021 Workshop AML Poster
Keywords: meta-learning, few-shot learning, poisoning, adversarial attack
TL;DR: We propose a set-based poisoning attack against deployed few-shot learners
Abstract: This paper examines the robustness of deployed few-shot meta-learning systems when they are fed an imperceptibly perturbed few-shot dataset, showing that the resulting predictions on test inputs can become worse than chance. This is achieved by developing a novel attack, Adversarial Support Poisoning (ASP), which crafts an imperceptibly perturbed support set. Even when only a small subset of malicious data points is inserted into the support set of a meta-learner, accuracy is significantly reduced. We evaluate the new attack on a variety of few-shot classification algorithms and scenarios, and propose a form of adversarial training that significantly improves robustness against both poisoning and evasion attacks.
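To make the idea concrete, here is a minimal sketch of support-set poisoning, not the authors' ASP implementation: a toy prototypical (nearest-centroid) few-shot classifier is attacked by ascending the query-set loss with respect to small, norm-bounded perturbations of the support points. The classifier, the finite-difference gradient, and all names (`query_loss`, `asp_sketch`, `eps`) are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (NOT the paper's ASP method): poison the support set of a toy
# 2-way prototypical classifier by sign-gradient ascent on the query loss,
# keeping perturbations inside an L-infinity budget `eps`.

def query_loss(support, support_y, queries, query_y):
    """Cross-entropy of queries under a nearest-centroid classifier."""
    # Class prototypes = per-class mean of the (here, raw-feature) support set.
    protos = np.stack([support[support_y == c].mean(axis=0) for c in (0, 1)])
    # Logits = negative squared distance to each prototype.
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(query_y)), query_y].mean()

def asp_sketch(support, support_y, queries, query_y,
               eps=0.5, steps=50, lr=0.2):
    """Perturb the support set to maximise the query loss (toy attack)."""
    adv = support.copy()
    h = 1e-4
    for _ in range(steps):
        # Finite-difference gradient of the query loss w.r.t. support points.
        base = query_loss(adv, support_y, queries, query_y)
        g = np.zeros_like(adv)
        for idx in np.ndindex(adv.shape):
            p = adv.copy()
            p[idx] += h
            g[idx] = (query_loss(p, support_y, queries, query_y) - base) / h
        adv = adv + lr * np.sign(g)                       # ascend the loss
        adv = np.clip(adv, support - eps, support + eps)  # stay in budget
    return adv
```

In the paper's setting the perturbation is applied to support images and optimized through the meta-learner itself; this sketch only illustrates the objective, maximizing the downstream query loss rather than attacking any single test input.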