Local Stability of Adversarial Examples
I-Jeng Wang, Michael Pekala, Evan Fuller
Feb 12, 2018 (modified: Feb 12, 2018) · ICLR 2018 Workshop Submission · readers: everyone
Abstract: Neural networks' lack of stability to "small" perturbations of the input signal is a topic of substantial interest. Recent work has explored a number of properties of these so-called adversarial examples (AE) in an attempt to further understand this phenomenon. The present work continues in this spirit and provides an explicit characterization of stability for AE derived from very small perturbations. We also suggest future directions for how this characterization might be used in practice to mitigate the impact of these AE.
TL;DR: A theoretical result on the local stability of adversarial examples and preliminary results on its implication for potential defense schemes.
Keywords: Adversarial Examples, Local Stability
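To make the notion of a "small" perturbation concrete, here is a minimal sketch (our own illustration, not the paper's construction) of an adversarial perturbation against a linear logistic classifier. For a linear model with weights w, the sign-of-gradient perturbation x' = x − ε·sign(w) lowers the logit by exactly ε·‖w‖₁, so an arbitrarily small ε in the ∞-norm already moves the score against the true class:

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping logits to class-1 probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

def score(x, w, b):
    """Class-1 probability of a linear logistic model."""
    return sigmoid(w @ x + b)

rng = np.random.default_rng(0)
w = rng.normal(size=10)   # illustrative weights (assumed, not from the paper)
b = 0.0
x = rng.normal(size=10)   # a clean input, nominally of class 1

eps = 0.1
# Perturb against the class-1 direction: each coordinate moves by at most eps.
x_adv = x - eps * np.sign(w)

drop = score(x, w, b) - score(x_adv, w, b)   # probability decreases
margin_change = (w @ x) - (w @ x_adv)        # logit drops by eps * ||w||_1
```

In high dimensions ε·‖w‖₁ can be large even when ε is tiny, which is one intuition for why very small perturbations of the input can flip a classifier's decision.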