Imperceptible Adversarial Attack on S Channel of HSV Colorspace

Published: 01 Jan 2023 · Last Modified: 11 Apr 2025 · IJCNN 2023 · CC BY-SA 4.0
Abstract: Deep neural network models are vulnerable to subtle adversarial perturbations that alter their predictions. Adversarial perturbations are typically computed for RGB images and are therefore distributed evenly across the RGB channels. Compared with RGB images, HSV images express hue, saturation, and brightness more intuitively. We find that confining the adversarial perturbation to the S channel preserves a high attack success rate while keeping the perturbation small and the visual quality of the adversarial examples good. Building on this finding, we propose an attack method, SPGD, that improves the visual quality of adversarial examples by generating perturbations only on the S channel. Following the attack principle of the PGD method, the RGB image is converted into an HSV image; the gradient computed by the model with respect to the S channel is added to the S channel, which is then recombined with the unperturbed H and V channels and converted back to an RGB image. Iteration stops once the attack succeeds. We compare SPGD with existing state-of-the-art attack methods. The results show that SPGD minimizes pixel perturbation while maintaining a high attack success rate, and achieves the best results in structural similarity and imperceptibility, with the fewest iterations and the shortest run time.
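
The abstract describes the SPGD loop at a high level. The following is a minimal sketch of that idea, assuming a PyTorch classifier and kornia's differentiable colorspace conversions; the function name `spgd_attack` and the hyperparameters (`alpha`, `max_iters`) are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F
from kornia.color import rgb_to_hsv, hsv_to_rgb  # differentiable conversions


def spgd_attack(model, image, label, alpha=0.01, max_iters=50):
    """Perturb only the saturation channel until the prediction flips.

    image: (1, 3, H, W) RGB tensor in [0, 1]; label: (1,) true class index.
    """
    hsv = rgb_to_hsv(image)                      # H in [0, 2*pi], S/V in [0, 1]
    h, s, v = hsv[:, 0:1], hsv[:, 1:2], hsv[:, 2:3]
    s = s.clone().detach().requires_grad_(True)  # only S is optimized

    for _ in range(max_iters):
        adv_rgb = hsv_to_rgb(torch.cat([h, s, v], dim=1)).clamp(0, 1)
        logits = model(adv_rgb)
        if logits.argmax(dim=1) != label:        # attack succeeded: stop early
            break
        loss = F.cross_entropy(logits, label)
        grad, = torch.autograd.grad(loss, s)
        # Ascend the loss along the sign of the S-channel gradient only;
        # H and V stay untouched, as in the paper's description.
        s = (s + alpha * grad.sign()).clamp(0, 1).detach().requires_grad_(True)

    return hsv_to_rgb(torch.cat([h, s, v], dim=1)).clamp(0, 1).detach()
```

Because H and V are held fixed, every iterate differs from the clean image only in saturation, which is what keeps the perturbation visually unobtrusive; the early-exit check mirrors the paper's stopping criterion of halting as soon as the attack succeeds.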