Playing the Game of Universal Adversarial Perturbations

Julien Perolat, Mateusz Malinowski, Bilal Piot, Olivier Pietquin

Sep 27, 2018 ICLR 2019 Conference Blind Submission
  • Abstract: We study the problem of learning classifiers robust to universal adversarial perturbations. While prior work approaches this problem via robust optimization, adversarial training, or input transformation, we instead phrase it as a two-player zero-sum game. In this formulation, both players simultaneously play the same game: one player chooses a classifier that minimizes a classification loss, whilst the other player crafts an adversarial perturbation that increases the same loss when applied to every sample in the training set. Observing that training a classifier (respectively, crafting an adversarial perturbation) is each player's best response to the other, we propose a novel extension of a game-theoretic algorithm, namely fictitious play, to the domain of training robust classifiers. Finally, we empirically demonstrate the robustness and versatility of our approach in two defence scenarios where universal attacks are performed on several image classification datasets -- CIFAR10, CIFAR100 and ImageNet.
  • Keywords: adversarial perturbations, universal adversarial perturbations, game theory, robust machine learning
  • TL;DR: We propose a robustification method under the presence of universal adversarial perturbations, by connecting a game theoretic method (fictitious play) with the problem of robustification, and making it more scalable.
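The fictitious-play scheme described in the abstract can be illustrated on a toy problem: each player best-responds to the empirical mixture of the opponent's past strategies. The sketch below is a minimal, hypothetical stand-in (logistic regression on synthetic data, an L-infinity-bounded universal perturbation, and hand-rolled gradient steps), not the paper's actual implementation; all names, budgets, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-in for an image dataset: linearly separable
# binary classification with labels in {-1, +1}.
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float) * 2 - 1

EPS = 0.5  # assumed L_inf budget for the universal perturbation


def best_response_classifier(X, y, deltas, lr=0.1, steps=200):
    """Classifier's best response: logistic regression trained on samples
    perturbed by every delta in the adversary's history (uniform mixture)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for delta in deltas:
            z = (X + delta) @ w
            # Gradient of the mean logistic loss log(1 + exp(-y * z)).
            s = -y / (1.0 + np.exp(y * z))
            grad += (X + delta).T @ s / len(y)
        w -= lr * grad / len(deltas)
    return w


def best_response_adversary(X, y, ws, eps=EPS, lr=0.5, steps=100):
    """Adversary's best response: a single universal delta, shared across all
    samples, that maximizes the average loss over the classifier history."""
    delta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for w in ws:
            z = (X + delta) @ w
            s = -y / (1.0 + np.exp(y * z))
            grad += w * s.mean()
        delta += lr * grad / len(ws)       # ascend the classification loss
        delta = np.clip(delta, -eps, eps)  # project onto the L_inf ball
    return delta


# Fictitious play: alternate best responses against the opponent's history.
deltas, ws = [np.zeros(d)], []
for _ in range(5):
    ws.append(best_response_classifier(X, y, deltas))
    deltas.append(best_response_adversary(X, y, ws))

w = ws[-1]
acc_clean = (((X @ w > 0) * 2 - 1) == y).mean()
acc_pert = ((((X + deltas[-1]) @ w > 0) * 2 - 1) == y).mean()
```

In this toy version each best response is computed exactly at every round; the paper's contribution includes making this scheme scale, which the sketch does not attempt to reproduce.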