Semidefinite relaxations for certifying robustness to adversarial examples

NeurIPS 2018 (edited Jul 16, 2019)
  • Abstract: Research on adversarial examples has evolved into an arms race between defenders, who attempt to train robust networks, and attackers, who try to prove them wrong. This has spurred interest in methods for certifying the robustness of a network. Methods based on combinatorial optimization compute the true robustness but do not yet scale. Methods based on convex relaxations scale better but can only yield non-vacuous bounds on networks trained with those relaxations. In this paper, we propose a new semidefinite relaxation that applies to ReLU networks with any number of layers. We show that it produces meaningful robustness guarantees across a spectrum of networks trained with other objectives, something previous convex relaxations were not able to achieve.
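The semidefinite relaxation rests on a standard observation: the ReLU z = max(x, 0) is exactly characterized by the quadratic constraints z ≥ 0, z ≥ x, and z(z − x) = 0, and the SDP relaxes the resulting quadratic system by lifting products such as z² and zx into entries of a positive semidefinite matrix. A minimal pure-Python sketch of that exact equivalence (function names are illustrative, not from the paper's code):

```python
def relu(x):
    """Scalar ReLU: max(x, 0)."""
    return max(x, 0.0)

def satisfies_quadratic_encoding(x, z, tol=1e-9):
    """Check the three quadratic constraints that exactly capture z = relu(x):
    z >= 0, z >= x, and z * (z - x) = 0.  An SDP relaxation replaces the
    products z*z and z*x with entries of a PSD moment matrix, making these
    constraints linear in that matrix (illustrative sketch, not the paper's code)."""
    return z >= -tol and z >= x - tol and abs(z * (z - x)) <= tol

# The encoding holds exactly at z = relu(x) and fails elsewhere.
for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    assert satisfies_quadratic_encoding(x, relu(x))

assert not satisfies_quadratic_encoding(-1.0, 0.5)  # z*(z-x) != 0 when x < 0 < z
assert not satisfies_quadratic_encoding(1.0, 0.0)   # violates z >= x
```

Because the constraints are quadratic rather than piecewise-linear, they can be imposed layer by layer on a single lifted matrix, which is what lets the relaxation apply to networks with any number of ReLU layers.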