Abstract: Deep neural networks (DNNs) have achieved state-of-the-art performance on many tasks, but they are highly vulnerable to adversarial examples. Many prior works assume the attacker has full access to the targeted model. A more realistic assumption is that the attacker can access the targeted model only by querying it with an input and observing the predicted class probabilities. In this paper we propose applying techniques similar to those used in evolutionary art to generate adversarial images.
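Below is a minimal sketch of the black-box threat model described above, not the authors' exact method: a simple (1+1)-style evolutionary loop that only queries predicted class probabilities and keeps mutations that lower the probability of the true class. The `query_probs` function is a hypothetical placeholder standing in for the targeted model's query interface.

```python
import numpy as np

def query_probs(image: np.ndarray) -> np.ndarray:
    """Hypothetical black-box oracle: returns class probabilities for an image.

    Placeholder for illustration only; substitute real queries to the target model.
    """
    logits = np.array([image.mean() * (i + 1) for i in range(10)])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def evolve_adversarial(image, true_label, steps=1000, sigma=0.05, eps=0.1):
    """Evolve a bounded perturbation that lowers the probability of `true_label`."""
    best = image.copy()
    best_score = query_probs(best)[true_label]
    for _ in range(steps):
        # Mutation step: add small Gaussian noise, then clip to a valid,
        # eps-bounded neighborhood of the original image.
        candidate = best + np.random.normal(0.0, sigma, image.shape)
        candidate = np.clip(candidate, image - eps, image + eps)
        candidate = np.clip(candidate, 0.0, 1.0)
        # Selection step: keep the candidate only if it reduces the
        # true-class probability reported by the black-box oracle.
        score = query_probs(candidate)[true_label]
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    img = np.random.rand(28, 28).astype(np.float32)  # toy grayscale input
    adv, p = evolve_adversarial(img, true_label=3, steps=200)
    print(f"true-class probability after attack: {p:.3f}")
```

This only illustrates the query-only setting; the paper's evolutionary-art-inspired generation procedure may differ substantially.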