Generating Adversarial Examples for Robust Deception against Image Transfer and Reloading

19 Dec 2020 (modified: 05 May 2023) · Submitted to GI 2021
Keywords: Adversarial Examples, Robustness, Reloading
Abstract: Adversarial examples play an irreplaceable role in evaluating the security and robustness of deep learning models, so understanding when they remain effective is essential for using them to improve models. In this paper, we explore the impact of input transformations on adversarial examples. First, we identify a new phenomenon: reloading an adversarial example from disk or transferring it to another platform can deactivate its malicious functionality, because these operations reduce pixel precision, which counters the perturbation added by the adversary. We validate this finding across mainstream adversarial attacks. Second, we propose Confidence Iteration, a novel method for generating more robust adversarial examples. The key idea is to set a confidence threshold and to factor the pixel loss caused by image reloading or transferring into the perturbation computation. We integrate our solution with several existing adversarial attacks, and experiments show that this integration significantly increases their success rate.
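
The pixel-precision loss the abstract describes is easy to reproduce: attack code typically operates on floating-point pixels, while common image formats store 8-bit integers, so saving and reloading rounds every pixel. Below is a minimal sketch, assuming NumPy and Pillow; the array shape and file name are illustrative, not from the paper.

```python
import numpy as np
from PIL import Image

# Stand-in for an adversarial example: float32 pixels in [0, 1], as most
# attack implementations produce before the image is written to disk.
adv = np.random.rand(224, 224, 3).astype(np.float32)

# Saving to an 8-bit format rounds each pixel to one of 256 levels.
Image.fromarray((adv * 255).round().astype(np.uint8)).save("adv.png")

# Reloading recovers only the quantized values.
reloaded = np.asarray(Image.open("adv.png")).astype(np.float32) / 255.0

# Maximum per-pixel rounding error for an 8-bit lossless format.
print("max quantization error:", np.abs(adv - reloaded).max())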
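For lossless PNG the rounding error per pixel is at most 1/510 ≈ 0.002, which is small in absolute terms but comparable to the per-pixel budget of low-epsilon attacks; lossy formats such as JPEG discard considerably more.

The abstract only outlines Confidence Iteration, so what follows is a hypothetical PyTorch sketch of one plausible reading: run an iterative attack, but judge success on the quantized image (simulating a reload) and keep iterating until the model is fooled with confidence above a threshold. All function and parameter names here are our own, not the authors'.

```python
import torch
import torch.nn.functional as F

def quantize(x: torch.Tensor) -> torch.Tensor:
    """Simulate reload/transfer pixel loss by rounding to 8-bit levels."""
    return torch.round(x.clamp(0.0, 1.0) * 255.0) / 255.0

def confidence_iteration(model, x, label, eps=8 / 255, step=1 / 255,
                         conf_threshold=0.9, max_iters=100):
    """Hypothetical sketch: iterate an FGSM-style attack until the
    *quantized* adversarial example is misclassified with confidence
    above conf_threshold."""
    x_adv = x.detach().clone()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss, staying in the epsilon ball and in [0, 1].
            x_adv = (x_adv + step * grad.sign()).clamp(x - eps, x + eps)
            x_adv = x_adv.clamp(0.0, 1.0)
            # Judge success on what survives saving and reloading.
            probs = F.softmax(model(quantize(x_adv)), dim=1)
            conf, pred = probs.max(dim=1)
        if (pred != label).all() and (conf >= conf_threshold).all():
            break
    return quantize(x_adv)
```

A real reload or transfer pipeline may involve lossy compression rather than simple rounding; the sketch only shows where that loss term enters the attack loop.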