Abstract: Adversarial attacks on human parsing models aim to mislead deep neural networks by injecting imperceptible perturbations into input images. Unlike general semantic segmentation, the different parts of a human body are connected within a closed region, so directly transferring existing adversarial attacks on standard semantic segmentation models to human parsers works poorly. In this paper, we propose an effective adversarial attack method for human parsing, called HPattack, which operates from two perspectives: sensitive pixel mining and prediction fooling. By analyzing the characteristics of the human parsing task, we propose exploiting human region and contour clues to improve attack capability. To further fool human parsers, we introduce a novel background target attack mechanism that leads predictions away from the correct labels, yielding high-quality adversarial examples. Comparative experiments on a human parsing benchmark dataset show that HPattack produces more effective adversarial examples than other methods under the same number of iterations. Furthermore, HPattack also successfully attacks the Segment Anything Model (SAM).
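To make the background target idea concrete, the following is a minimal sketch, not the paper's actual method: an iterative sign-gradient targeted attack on a toy per-pixel linear softmax classifier standing in for a parser. It minimizes the cross-entropy to an assumed background class (index 0) while projecting the perturbation onto an L-infinity ball, which is the standard mechanism behind "lead the prediction toward a chosen target" attacks. All names, shapes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a human parser: a per-pixel linear softmax classifier.
# X: (num_pixels, feat_dim) per-pixel features; W: (feat_dim, num_classes).
# Class 0 plays the role of the background label (an assumption for this sketch).
P, D, K = 64, 8, 5
W = rng.normal(size=(D, K))
X = rng.normal(size=(P, D))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict(X):
    return (X @ W).argmax(axis=1)

def background_target_attack(X, eps=0.5, alpha=0.05, steps=40, target=0):
    """Iterative targeted attack toward the background class (PGD-style sketch).

    Minimizes cross-entropy to `target` with sign-gradient steps, projected
    onto an L-inf ball of radius `eps` around the clean input.
    """
    X_adv = X.copy()
    for _ in range(steps):
        probs = softmax(X_adv @ W)
        probs[:, target] -= 1.0        # d(CE)/d(logits) = softmax - onehot(target)
        grad = probs @ W.T             # chain rule through the linear layer
        X_adv = X_adv - alpha * np.sign(grad)     # descend: push toward target
        X_adv = np.clip(X_adv, X - eps, X + eps)  # L-inf projection
    return X_adv

X_adv = background_target_attack(X)
frac_bg_before = (predict(X) == 0).mean()
frac_bg_after = (predict(X_adv) == 0).mean()
```

A real attack on a parser would replace the analytic gradient with backpropagation through the network, but the loop structure (targeted loss, sign step, projection) is the same.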