Transferable Physical Adversarial Patch Attack for Remote Sensing Object Detection

Published: 2024, Last Modified: 26 Jun 2025 · IGARSS 2024 · CC BY-SA 4.0
Abstract: Deep neural networks (DNNs) have been widely used in remote sensing but have been shown to be vulnerable to adversarial examples. By adding elaborately designed perturbations to clean images, DNNs can be induced to output wrong predictions. Research on adversarial attacks contributes to the study of model robustness. However, previous methods mainly focus on the white-box scenario or the digital domain for classification tasks, while the vulnerability of remote sensing detectors has not been fully explored. Aiming to attack black-box remote sensing detectors in the physical domain, we propose to generate a transferable physical adversarial patch (TPAP) as the perturbation. Specifically, the initial patch is optimized by a U-Net and modified by the plane mask and position mask before being applied to the clean image. By attacking a surrogate model, TPAP can be transferred to the target model. Extensive experimental results validate the attack ability of TPAP and evaluate the robustness of current one-stage detectors.
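The abstract describes compositing the optimized patch onto a clean image via a plane mask and a position mask before the attack step. A minimal sketch of such masked compositing is shown below; the function name, mask semantics, and NumPy formulation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def apply_patch(image, patch, plane_mask, position_mask):
    """Illustrative masked patch application (assumed semantics).

    plane_mask: restricts the patch to the target object's surface.
    position_mask: selects where in the scene the patch is placed.
    Both are binary arrays with the same shape as the image.
    """
    # Keep only the part of the patch lying on the object surface.
    masked_patch = patch * plane_mask
    # Paste the masked patch at the placement region, keeping the
    # original image pixels everywhere else.
    region = position_mask * plane_mask
    return image * (1.0 - region) + masked_patch * position_mask

# Toy example: 8x8 grayscale image with a 4x4 patch region.
img = np.zeros((8, 8))
patch = np.full((8, 8), 0.5)
plane = np.zeros((8, 8)); plane[2:6, 2:6] = 1.0   # object surface
pos = np.zeros((8, 8));   pos[2:6, 2:6] = 1.0     # placement region
out = apply_patch(img, patch, plane, pos)
```

In a real attack pipeline, the patch pixels would be the variable being optimized (here via a U-Net, per the abstract), with gradients flowing through this compositing step into the surrogate detector's loss.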