Multiview Consistent Physical Adversarial Camouflage Generation through Semantic Guidance

Published: 01 Jan 2024, Last Modified: 28 Sept 2024, IJCNN 2024, CC BY-SA 4.0
Abstract: Real-world camouflage-based physical adversarial attacks have exhibited the capability to deceive object detection models into predicting incorrect categories or bounding boxes. Nevertheless, a common issue with existing adversarial camouflages is the multi-view inconsistency of their attack results: the predicted category frequently changes as the observing viewpoint changes. This multi-view inconsistency weakens the stealthiness of existing adversarial camouflages and can therefore trigger alarms about adversarial attacks. To address this problem, we propose a novel Multi-view Consistent adversarial Camouflage (MCC) generation framework. Specifically, we formulate adversarial camouflage generation as a texture encoding and decoding problem for the target objects. During encoding, the semantic information of the target category is embedded, yielding adversarial camouflage that carries the specified target semantics. We then use a 3D neural renderer to generate camouflage that is printable in the real world. Our approach enforces semantic constraints on the adversarial camouflage in the latent space, ensuring that the semantic information of the camouflage remains aligned with the specified category regardless of viewpoint. As a result, the generated adversarial camouflage exhibits better stealthiness. We conduct attack experiments against state-of-the-art object detection models in a simulated physical world. Additionally, we transfer the adversarial camouflage to the real physical world and apply it to a vehicle model. The experimental results show that our method not only achieves excellent attack performance but also exhibits significantly better multi-view consistency than other methods.
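For concreteness, the encode-embed-decode-render-attack loop described in the abstract can be sketched in PyTorch as below. This is a minimal illustration, not the authors' implementation: all module names (SemanticTextureEncoder, TextureDecoder, train_step), network shapes, the stand-in renderer and detector, and the loss weight 0.1 are assumptions made for the sake of the example.

```python
# Hypothetical sketch of the MCC pipeline: encode a texture while embedding
# target-class semantics, decode it into a camouflage texture, render it from
# multiple viewpoints, and optimize a targeted attack loss plus a latent-space
# semantic-consistency constraint. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticTextureEncoder(nn.Module):
    """Encodes a UV texture map and injects target-category semantics."""
    def __init__(self, num_classes: int, latent_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.class_embed = nn.Embedding(num_classes, latent_dim)
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, texture, target_class):
        z = self.fc(self.conv(texture).flatten(1))
        # Embed the target category's semantics into the latent code.
        return z + self.class_embed(target_class)

class TextureDecoder(nn.Module):
    """Decodes the semantic latent code back into a camouflage texture."""
    def __init__(self, latent_dim: int = 256, tex_size: int = 64):
        super().__init__()
        self.tex_size = tex_size
        self.fc = nn.Linear(latent_dim, 128 * (tex_size // 4) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, self.tex_size // 4, self.tex_size // 4)
        return self.deconv(h)

def train_step(encoder, decoder, renderer, detector,
               texture, target_class, viewpoints):
    z = encoder(texture, target_class)
    camo = decoder(z)
    # Targeted attack loss averaged over viewpoints, which pushes the
    # detector toward the same (target) category from every view.
    attack_losses = []
    for view in viewpoints:
        img = renderer(camo, view)   # stand-in for a differentiable 3D renderer
        logits = detector(img)       # stand-in for a frozen detector's class logits
        attack_losses.append(F.cross_entropy(logits, target_class))
    # Latent-space semantic constraint: keep the latent code aligned with
    # the target category's embedding, independent of viewpoint.
    sem_loss = F.mse_loss(z, encoder.class_embed(target_class))
    return torch.stack(attack_losses).mean() + 0.1 * sem_loss

# Usage with trivial stand-ins for the renderer and detector:
enc = SemanticTextureEncoder(num_classes=80)
dec = TextureDecoder()
renderer = lambda tex, view: tex  # a real pipeline would rasterize a 3D mesh here
detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 80))
texture = torch.rand(1, 3, 64, 64)
target = torch.tensor([0])
loss = train_step(enc, dec, renderer, detector, texture, target, viewpoints=range(4))
loss.backward()
```

Averaging the attack loss across rendered viewpoints, together with the latent semantic constraint, is one plausible way to realize the multi-view consistency objective the abstract describes; the actual MCC loss formulation may differ.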