Transferable adversarial attacks for multi-model systems coupling image fusion with classification models
Abstract: Image preprocessing models typically serve as the first stage of advanced visual tasks, aiming to enhance the performance of subsequent tasks. For example, multi-focus image fusion significantly improves the performance of downstream semantic classification. However, with the advancement of adversarial attack techniques, these models face significant challenges. Prior research has explored only the impact of adversarial attacks on individual models, without an in-depth investigation of the robustness of tasks that combine multiple models. This study investigates the robustness of tasks that couple multi-focus image fusion with image classification. To address this challenge, we design a new adversarial attack generator tailored to scenarios that combine multi-focus image fusion with image classification. The attack uses a decision-map surrogate model and a binary weight map to add adversarial perturbations precisely to the informative (in-focus) regions of multi-focus images, and incorporates attention mechanisms and Grad-CAM to refine the perturbed regions, disrupting the key features of the fused image and thereby improving the transferability of the attack. Comprehensive experimental results show that the method significantly improves the effectiveness of attacks on the downstream classification task while preserving the effectiveness of the fusion model.
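The core masking step described in the abstract can be illustrated with a minimal sketch. It assumes the decision-map surrogate yields a binary mask of in-focus regions and Grad-CAM yields a saliency map; the function name, threshold, and the FGSM-style sign perturbation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_perturbation(grad, decision_map, cam, eps=8 / 255, cam_thresh=0.5):
    """Sign-based perturbation (FGSM-style, assumed here for illustration)
    restricted to pixels that are both in-focus (decision_map == 1) and
    salient under a Grad-CAM map normalized to [0, 1]."""
    # Normalize the Grad-CAM saliency map to [0, 1].
    cam_norm = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Binary weight map: informative AND salient regions only.
    region = decision_map * (cam_norm >= cam_thresh)
    # Perturb only inside the selected region.
    return eps * np.sign(grad) * region

# Toy usage with hypothetical values: perturbation is zero wherever the
# decision map or the saliency gate is zero.
grad = np.ones((2, 2))
decision_map = np.array([[1, 0], [1, 1]])
cam = np.array([[1.0, 1.0], [0.0, 1.0]])
delta = masked_perturbation(grad, decision_map, cam, eps=0.1)
```

In a full pipeline, `grad` would be the loss gradient back-propagated through the downstream classifier and the fusion (or surrogate) model; the mask keeps the perturbation budget concentrated on regions that survive fusion, which is what drives the claimed transferability.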