AdvOps: Decoupling adversarial examples

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · Pattern Recognit. 2024 · CC BY-SA 4.0
Abstract: Highlights
• We find that the prediction for an adversarial example can be decoupled into the sum of the model's predictions on the clean sample and on the perturbation, which provides a useful tool for gaining insight into the underlying relationship between the inputs and the outputs.
• We propose a generative-model-based method to craft adversarial perturbations that satisfy the decoupling principle while achieving superior attack performance. A decoupling loss is devised to guide the generative model so that the decoupling principle is preserved (see the sketch below).
• We conduct extensive experiments against different networks on the complex ImageNet and the simpler CIFAR-10 datasets. The results suggest that the proposed method outperforms the comparison methods by a large margin on the devised metric that balances attack performance and adherence to the decoupling principle.
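The exact form of the decoupling principle and the decoupling loss is not given in this abstract. The following is a minimal, illustrative PyTorch sketch of one way such a loss could be written, assuming the principle states that the model's logits satisfy f(x + δ) ≈ f(x) + f(δ); the function name `decoupling_loss` and the use of a mean-squared-error penalty are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def decoupling_loss(model, x_clean, perturbation):
    """Illustrative sketch of a decoupling-style loss (assumed form, not the paper's exact method).

    Encourages the model's prediction on the adversarial example,
    f(x + delta), to match the sum of its predictions on the clean
    sample, f(x), and on the perturbation alone, f(delta).
    """
    logits_adv = model(x_clean + perturbation)   # f(x + delta)
    logits_clean = model(x_clean).detach()       # f(x), treated as a fixed target
    logits_pert = model(perturbation)            # f(delta)
    # Penalize deviation from the assumed decoupling principle f(x + delta) ~= f(x) + f(delta)
    return F.mse_loss(logits_adv, logits_clean + logits_pert)
```

In a generator-based attack of the kind the highlights describe, a term like this could be added to the usual attack objective so that the crafted perturbation both misleads the model and keeps the adversarial prediction decomposable into its clean and perturbation components.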