Medical Image Classification Attack Based on Texture Manipulation

Published: 01 Jan 2024 · Last Modified: 05 Sept 2025 · ICPR (12) 2024 · CC BY-SA 4.0
Abstract: The security of artificial intelligence systems has received great attention over the past few years, especially in the field of smart medical diagnosis. To enhance the security of smart medical systems, it is important to study adversarial attack methods so that defenses can be improved; the central challenge in adversarial attacks lies in crafting effective strategies that embed covert malicious behavior into the system. However, due to the diversity of medical imaging modes and dimensions, creating a unified attack approach that produces imperceptible examples with high content similarity and applies across various medical image classification systems is difficult. Most existing attack methods target natural image classification models: they inevitably add global noise to the image, which makes the attack more visible, and they do not account for the fact that medical image classification relies more heavily on texture information. To address this issue, we propose a new adversarial attack method based on changing texture information that builds on CycleGAN, while also incorporating AdvGAN to ensure a high attack success rate. Our method can attack a variety of medical image classification tasks. Our experiments use two public medical image datasets, a chest X-ray dataset and a melanoma dermoscopy dataset, which differ in imaging mode and dimension. The results indicate that our model outperforms other state-of-the-art adversarial attack methods when attacking medical image classification tasks across different imaging modes and dimensions.
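The abstract describes combining a CycleGAN-style texture-translation generator with an AdvGAN-style misclassification objective. The sketch below is only an illustration of how such a combined loss might look, not the authors' implementation: the TinyGenerator architecture, the inverse generator F_inv, the placeholder victim classifier, and the loss weights lambda_cyc and lambda_adv are all assumptions, and the GAN discriminator terms of both CycleGAN and AdvGAN are omitted for brevity.

```python
# Illustrative sketch only: every module and weight here is an assumption,
# not the paper's published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F_nn

class TinyGenerator(nn.Module):
    """Minimal image-to-image generator standing in for a CycleGAN generator."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

# G maps clean -> texture-manipulated; F_inv maps back for cycle consistency.
G, F_inv = TinyGenerator(), TinyGenerator()
target_classifier = nn.Sequential(  # placeholder victim model, assumed binary
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)

def attack_loss(x, y, lambda_cyc=10.0, lambda_adv=1.0):
    """CycleGAN-style cycle loss plus AdvGAN-style misclassification loss."""
    x_adv = G(x)                          # texture-altered adversarial example
    x_rec = F_inv(x_adv)                  # reconstruct the original content
    cycle = F_nn.l1_loss(x_rec, x)        # keeps content similar to the input
    logits = target_classifier(x_adv)
    adv = -F_nn.cross_entropy(logits, y)  # push prediction away from true label
    return lambda_cyc * cycle + lambda_adv * adv

x = torch.rand(4, 3, 64, 64)              # dummy batch standing in for images
y = torch.randint(0, 2, (4,))             # dummy binary labels
loss = attack_loss(x, y)
loss.backward()                           # gradients flow into G and F_inv
```

Minimizing this loss keeps the generated image close to the original through the cycle term while the negated cross-entropy term drives the victim classifier away from the true label, which is the intuition behind pairing a content-preserving translation network with an attack objective.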