Improving Tiled Evolutionary Adversarial Attack

Published: 01 Jan 2023 · Last Modified: 13 Jul 2025 · PKDD/ECML Workshops (2) 2023 · CC BY-SA 4.0
Abstract: Adversarial examples are a well-known phenomenon in image classification. They are maliciously altered inputs that a deep learning model classifies incorrectly, even though the added noise is nearly imperceptible to the human eye. Defenses against adversarial examples can be either proactive or reactive. This paper builds on previous work that tested one of the state-of-the-art reactive defenses. While that work defeated the defense using an evolutionary attack, a notable drawback was the visible adversarial noise. This work improves on that result by using the Structural Similarity Index (SSIM) to measure the distance between benign and adversarial inputs, and by introducing a new mutation into the evolution process. These adjustments not only produced adversarial images with less visible noise, but also accelerated their generation.
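The abstract does not spell out the SSIM fitness term or the new mutation operator, so the following is only a minimal sketch of the general idea: an SSIM-based perceptual distance (here via scikit-image's `structural_similarity`, assuming version 0.19+ for the `channel_axis` argument) and a hypothetical per-tile noise mutation. The names and parameters (`ssim_distance`, `mutate_tile`, `tile_size`, `scale`) are illustrative, not the paper's.

```python
# Illustrative sketch only: the paper's exact fitness function and mutation
# operator are not given in the abstract. All names/parameters are hypothetical.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def ssim_distance(benign, adversarial):
    """Perceptual distance as 1 - SSIM (0 means visually identical images).

    Assumes float images in [0, 1] with channels along the last axis.
    """
    score = ssim(benign, adversarial, channel_axis=-1, data_range=1.0)
    return 1.0 - score


def mutate_tile(image, tile_size=8, scale=0.05, rng=None):
    """Generic tiled mutation: add small uniform noise to one random tile."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    y = int(rng.integers(0, max(1, h - tile_size)))
    x = int(rng.integers(0, max(1, w - tile_size)))
    mutated = image.copy()
    tile = mutated[y:y + tile_size, x:x + tile_size]
    noise = rng.uniform(-scale, scale, size=tile.shape)
    mutated[y:y + tile_size, x:x + tile_size] = np.clip(tile + noise, 0.0, 1.0)
    return mutated
```

In such a scheme, the SSIM distance would typically enter the evolutionary fitness as a penalty term, so candidates that fool the classifier while staying perceptually close to the benign image are preferred.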