Adversarial examples in the physical world
Alexey Kurakin, Ian J. Goodfellow, Samy Bengio
Feb 11, 2017 (modified: Feb 11, 2017) · ICLR 2017 workshop submission · readers: everyone
Abstract: Most existing machine learning classifiers are highly vulnerable to adversarial examples.
An adversarial example is a sample of input data which has been modified
very slightly in a way that is intended to cause a machine learning classifier
to misclassify it.
In many cases, these modifications can be so subtle that a human observer does
not even notice the modification at all, yet the classifier still makes a mistake.
Adversarial examples pose security concerns
because they could be used to perform an attack on machine learning systems, even if the adversary has no
access to the underlying model.
All previous work has assumed a threat model in which the adversary can
feed data directly into the machine learning classifier.
This is not always the case for systems operating in the physical world,
for example those that use signals from cameras and other sensors as input.
This paper shows that even in such physical world scenarios, machine learning systems are vulnerable
to adversarial examples.
We demonstrate this by feeding adversarial images obtained from a cell-phone camera
to an ImageNet Inception classifier and measuring the classification accuracy of the system.
We find that a large fraction of adversarial examples are classified incorrectly
even when perceived through the camera.
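
As an illustration of the kind of perturbation described above, the following minimal sketch generates an adversarial example with the fast gradient sign method against a pretrained ImageNet Inception classifier. The Keras InceptionV3 checkpoint, the epsilon value, and the [-1, 1] preprocessing range are assumptions made for this sketch, not the paper's exact setup:

import tensorflow as tf

# Pretrained ImageNet classifier; the specific checkpoint (Keras InceptionV3)
# is an assumption for this sketch.
model = tf.keras.applications.InceptionV3(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm(image, one_hot_label, epsilon=0.007):
    """Fast gradient sign method: take one small step in the direction of the
    sign of the loss gradient with respect to the input pixels."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(one_hot_label, prediction)
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    # Keep pixels inside the valid range of the preprocessed input.
    return tf.clip_by_value(adversarial, -1.0, 1.0)

In the physical-world setting studied in the paper, images produced this way are printed, photographed with a cell-phone camera, and fed back to the classifier to measure how often the misclassification survives the camera transformation.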
Keywords: Supervised Learning, Computer Vision