Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission · readers: everyone
Abstract: We introduce a simple synthetic task in order to study the phenomenon of adversarial examples. For this task, the data manifold is mathematically defined and there is an analytic characterization of the model's decision boundary. We show that when the data is high dimensional, generalization error may be so low that model errors may only be found adversarially. Despite being extremely unlikely under the data distribution, these errors appear to always lie close to randomly sampled "clean" data. These adversarial examples exist on the data manifold, even for a model class which can provably obtain perfect accuracy. We conclude by drawing connections to adversarial examples for other machine learning models.
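The abstract does not spell out the synthetic task, but one standard way to build a dataset whose manifold is mathematically defined is to sample points uniformly from high-dimensional spheres (normalized Gaussian vectors are uniform on the sphere). The sketch below is illustrative only, not the paper's exact construction; the function name `sample_sphere` and the choice of radii are assumptions made for this example.

```python
import numpy as np

def sample_sphere(n, d, radius=1.0, seed=None):
    """Sample n points uniformly from the (d-1)-sphere of the given radius.

    Normalizing i.i.d. standard Gaussian vectors yields a uniform
    distribution on the sphere, so the data manifold (and the distance
    of every point from the origin) is known analytically.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    return radius * x / np.linalg.norm(x, axis=1, keepdims=True)

# Illustrative binary task: two concentric spheres in high dimension.
# The true decision boundary is itself a sphere of intermediate radius,
# so model errors can be characterized exactly against it.
inner = sample_sphere(1000, 500, radius=1.0, seed=0)
outer = sample_sphere(1000, 500, radius=1.3, seed=1)
```

Because every sampled point lies exactly on its sphere, a classifier's deviations from the analytic boundary can be measured directly rather than estimated from held-out data.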
TL;DR: We explore properties of adversarial examples on a synthetic task.
Keywords: Adversarial Examples, Deep Learning