Identifying Attack-Specific Signatures in Adversarial Examples

Published: 01 Jan 2024 · Last Modified: 29 Sept 2024 · ICASSP 2024 · CC BY-SA 4.0
Abstract: The adversarial attack literature contains numerous algorithms for crafting perturbations that manipulate neural network predictions. Many of these adversarial attacks optimize inputs under the same constraints and have similar downstream impact on the models they attack. In this work, we first show how to reconstruct an adversarial perturbation, namely the difference between an adversarial example and the original natural image, from the adversarial example alone. We then classify reconstructed adversarial perturbations according to the algorithm that generated them. This pipeline, REDRL, identifies the attack algorithm used to generate a sample from only the sample itself. The ability to determine which algorithm generated an example implies that different attack algorithms leave distinct signatures in their adversarial examples.
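A minimal sketch of such a two-stage pipeline, shown for illustration only: the module names, architectures, and the number of attack classes below are assumptions, not the paper's actual REDRL implementation. Stage one estimates the perturbation from an adversarial example; stage two classifies the estimated perturbation by attack algorithm.

```python
import torch
import torch.nn as nn

class PerturbationEstimator(nn.Module):
    """Predicts the adversarial perturbation (x_adv - x_clean) from the adversarial image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x_adv):
        return self.net(x_adv)

class AttackClassifier(nn.Module):
    """Classifies an estimated perturbation by the attack algorithm that produced it."""
    def __init__(self, num_attacks):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_attacks),
        )

    def forward(self, delta):
        return self.net(delta)

# Stage 1: train the estimator to regress the true perturbation (x_adv - x_clean).
# Stage 2: train the classifier on estimated perturbations, labeled by attack algorithm.
estimator = PerturbationEstimator()
classifier = AttackClassifier(num_attacks=5)  # hypothetical number of attack types

x_adv = torch.rand(8, 3, 32, 32)          # batch of adversarial examples
delta_hat = estimator(x_adv)              # reconstructed perturbations
attack_logits = classifier(delta_hat)     # predicted attack algorithm per example
```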