Abstract: Physical adversarial attacks on object detection have become an attractive research topic. Many works have proposed adversarial patches or camouflage that achieve successful attacks in the real world, but these methods have drawbacks, especially for 3D humans. First, camouflage-based methods are neither dynamic nor mimetic enough: the adversarial texture is not rendered in conjunction with the background features of the target, which to some extent violates the definition of adversarial examples. Second, non-rigid physical surfaces are not modeled in detail, so the rendered textures are not robust and appear very rough in 3D scenarios. In this paper, we propose the Mimic Octopus Attack (MOA), a novel method that bridges these gaps by generating a mimetic and robust physical adversarial texture that camouflages target objects against detectors across multiple views and scenes. To achieve joint optimization, MOA combines iterative training with a mimetic style loss, an adversarial loss, and human visual inspection. Experiments in specific scenarios of CARLA, which is widely recognized as a surrogate for the physical domain, demonstrate its advanced performance: a 67.62% decrease in mAP@0.5 for the YOLO-V5 detector compared to the clean baseline, an average improvement of 4.14% over state-of-the-art attacks, and an average ASR of up to 85.28%. Moreover, MOA's robustness when attacking diverse person models and detectors demonstrates its outstanding transferability.
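The joint optimization described above can be illustrated with a minimal toy sketch. This is not the authors' pipeline: the "detector score" below is a hypothetical stand-in for a real detector's confidence, the style term is a simple squared distance to a background color, and the weighting (`style_weight`) and finite-difference gradient descent are illustrative assumptions only.

```python
# Toy sketch of a joint mimetic-style + adversarial objective.
# Hypothetical stand-ins: `detector_score` plays the role of a real
# detector's confidence; the style loss is mean squared distance to a
# background color vector. All names and constants are illustrative.

def joint_loss(texture, background, detector_score, style_weight=0.5):
    adv = detector_score(texture)  # adversarial term: push detection score down
    style = sum((t - b) ** 2 for t, b in zip(texture, background)) / len(texture)
    return adv + style_weight * style  # mimetic term keeps texture near background

def optimize(texture, background, detector_score, lr=0.1, steps=200):
    """Finite-difference gradient descent on the joint objective."""
    eps = 1e-4
    for _ in range(steps):
        grads = []
        for i in range(len(texture)):
            bumped = list(texture)
            bumped[i] += eps
            g = (joint_loss(bumped, background, detector_score)
                 - joint_loss(texture, background, detector_score)) / eps
            grads.append(g)
        # gradient step, clamped to a valid color range
        texture = [min(1.0, max(0.0, t - lr * g)) for t, g in zip(texture, grads)]
    return texture

# toy detector: responds strongly to bright textures
toy_score = lambda tex: sum(tex) / len(tex)
background = [0.2, 0.3, 0.25]
adv_texture = optimize([0.9, 0.8, 0.85], background, toy_score)
```

In the paper's actual setting the adversarial term would come from a detector such as YOLO-V5 and the texture would be rendered onto a 3D human model; this sketch only shows how the two loss terms trade off in one objective.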