TL;DR: We replace the Lp ball constraint with the Voronoi cells of the training data to produce more robust models.
Abstract: Adversarial examples are a pervasive phenomenon in machine learning: seemingly imperceptible perturbations to the input cause otherwise statistically accurate models to misclassify. Adversarial training, one of the most successful empirical defenses against adversarial examples, refers to training on adversarial examples generated within a geometric constraint set. The most commonly used constraint set is an $L_p$-ball of radius $\epsilon$ in some norm. We introduce adversarial training with Voronoi constraints, which replaces the $L_p$-ball constraint with the Voronoi cell of each training point. We show that adversarial training with Voronoi constraints produces robust models which significantly improve over the state-of-the-art on MNIST and are competitive on CIFAR-10.
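As a rough illustration of the constraint (a sketch only, not the paper's exact algorithm), the code below runs a PGD-style attack in which each sign-gradient step is projected back into the Voronoi cell of the source training point by bisecting toward that point. The function names, the toy linear classifier, and the bisection-based projection are all illustrative assumptions.

```python
# Hypothetical sketch: adversarial example search constrained to the Voronoi
# cell of the source training point, instead of an L_inf ball.
import numpy as np

def in_voronoi_cell(x, x_src, X_train, tol=1e-9):
    """True if x is at least as close to x_src as to every other training point."""
    d_src = np.linalg.norm(x - x_src)
    d_all = np.linalg.norm(X_train - x, axis=1)
    return np.all(d_src <= d_all + tol)

def project_to_cell(x, x_src, X_train, iters=20):
    """Shrink x toward x_src along the segment [x_src, x] until it lies in the cell.

    The Voronoi cell is convex and contains x_src, so bisection on the segment
    yields an (approximately) feasible point.
    """
    if in_voronoi_cell(x, x_src, X_train):
        return x
    lo, hi = 0.0, 1.0  # fraction of the step from x_src toward x
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if in_voronoi_cell(x_src + mid * (x - x_src), x_src, X_train):
            lo = mid
        else:
            hi = mid
    return x_src + lo * (x - x_src)

def voronoi_pgd(x_src, y, X_train, grad_fn, step=0.1, steps=40):
    """Ascend the loss with sign-gradient steps, projecting back into the cell."""
    x = x_src.copy()
    for _ in range(steps):
        x = x + step * np.sign(grad_fn(x, y))
        x = project_to_cell(x, x_src, X_train)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(50, 2))
    w, b = np.array([1.0, -1.0]), 0.0  # toy linear classifier

    def grad_fn(x, y):
        # Gradient of the logistic loss log(1 + exp(-y (w.x + b))) w.r.t. x.
        margin = y * (w @ x + b)
        return -y * w / (1.0 + np.exp(margin))

    x_adv = voronoi_pgd(X_train[0], y=1.0, X_train=X_train, grad_fn=grad_fn)
    print(in_voronoi_cell(x_adv, X_train[0], X_train))  # stays in the cell: True
```

In an adversarial training loop, `x_adv` would then replace (or augment) the clean example when computing the training loss, exactly as with an $L_p$-ball constraint, but with the perturbation budget determined by the geometry of the training data rather than a fixed radius $\epsilon$.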
Keywords: adversarial examples, adversarial training, Voronoi diagrams