Abstract: Despite the great success of deep neural networks (DNNs) in computer vision, they are vulnerable to adversarial attacks. Given a well-trained DNN and an image <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$x$</tex>, a malicious and imperceptible perturbation <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$\varepsilon$</tex> can easily be crafted and added to <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$x$</tex> to generate an adversarial example <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$x^{\prime}$</tex>. The DNN's output in response to <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$x^{\prime}$</tex> will differ from its output in response to <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$x$</tex>. To shed light on how to defend DNNs against such adversarial attacks, in this paper we apply statistical methods to model and analyze the adversarial perturbations <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$\varepsilon$</tex> crafted by the FGSM, PGD, and CW attacks.
It is shown statistically that (1) the adversarial perturbations <tex xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">$\varepsilon$</tex> crafted by the FGSM, PGD, and CW attacks can all be modelled in the Discrete Cosine Transform (DCT) domain by the Transparent Composite Model (TCM) based on generalized Gaussians (GGTCM); (2) the CW attack puts more perturbation energy in the background of an image than in its object, whereas the FGSM and PGD attacks make no such distinction; and (3) the energy of the adversarial perturbation is more concentrated on DC components under the CW attack than under the FGSM and PGD attacks.
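The quantities the abstract refers to can be made concrete with a minimal sketch: craft an FGSM-style perturbation, then inspect its energy in the DCT domain. This is only an illustration of the mechanics, not the paper's method: a single linear logit stands in for the trained DNN, the step size `eps_step` is a hypothetical choice, and `scipy.fft.dctn` supplies the orthonormal 2-D DCT.

```python
import numpy as np
from scipy.fft import dctn  # orthonormal 2-D DCT for the DCT-domain view

rng = np.random.default_rng(0)

# Toy stand-in for a well-trained DNN: a single linear logit f(x) = sum(w * x).
# (The paper attacks real DNNs on real images; this only shows the mechanics.)
x = rng.random((8, 8))            # "image" with pixel values in [0, 1]
w = rng.standard_normal((8, 8))   # "model" weights

# FGSM step: move each pixel by eps_step along the sign of the loss gradient.
# For f(x) = sum(w * x), the gradient with respect to x is simply w.
eps_step = 0.01                   # hypothetical perturbation budget
grad = w
x_adv = np.clip(x + eps_step * np.sign(grad), 0.0, 1.0)

# The realized perturbation epsilon = x' - x, analyzed in the DCT domain.
delta = x_adv - x
E = dctn(delta, norm='ortho')     # DCT coefficients of the perturbation
dc_energy = E[0, 0] ** 2          # energy on the DC component
total_energy = (E ** 2).sum()     # equals (delta ** 2).sum() by Parseval
```

Because the DCT with `norm='ortho'` is orthonormal, `total_energy` matches the spatial-domain energy of the perturbation, so the DC-concentration comparison in finding (3) amounts to comparing ratios like `dc_energy / total_energy` across attacks.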