Abstract: Deep learning is increasingly employed in medical imaging data mining; nevertheless, deep learning models are vulnerable to adversarial examples. By adding perturbations that are invisible to the human eye to the original medical images, adversarial examples can mislead deep learning models into making incorrect judgments, and existing defenses against such attacks are not fully effective. In this paper, we propose a highly effective adversarial defense method, named ADDM, which restores an adversarial example back to the original example through a diffusion model. We use six adversarial attacks to test the performance of adversarial defenses and select four popular defenses for comparison with ADDM. Experiments on two medical benchmark datasets show that our method is more stable and efficient than the other defenses and preserves the robustness of deep learning models when they face adversarial examples.
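To make the diffusion-based restoration idea concrete, the sketch below illustrates generic diffusion purification: the adversarial image is partially noised by the forward diffusion process and then denoised by a pretrained diffusion model, so the adversarial perturbation is drowned out and removed along with the added noise. This is only a minimal illustration of the general technique, not the paper's exact ADDM procedure; the `denoiser` network, the `betas` schedule, and the stopping step `t_star` are assumed placeholders.

```python
import torch

def diffusion_purify(x_adv, denoiser, betas, t_star):
    """Hypothetical sketch of diffusion-based purification.

    x_adv:    adversarial image batch, values scaled to [-1, 1]
    denoiser: pretrained noise-prediction network eps_theta(x_t, t) (assumed)
    betas:    forward-process variance schedule, shape [T]
    t_star:   number of forward steps used to submerge the perturbation
    """
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    # Forward diffusion: jump directly to step t_star with one Gaussian draw,
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise.
    a_bar = alpha_bar[t_star - 1]
    noise = torch.randn_like(x_adv)
    x_t = a_bar.sqrt() * x_adv + (1.0 - a_bar).sqrt() * noise

    # Reverse DDPM-style denoising from t_star back to 0.
    for t in reversed(range(t_star)):
        t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
        eps = denoiser(x_t, t_batch)                       # predicted noise
        coef = betas[t] / (1.0 - alpha_bar[t]).sqrt()
        mean = (x_t - coef * eps) / alphas[t].sqrt()
        if t > 0:                                          # add sampling noise except at the last step
            mean = mean + betas[t].sqrt() * torch.randn_like(x_t)
        x_t = mean

    return x_t  # purified image, fed to the classifier in place of x_adv
```

The purified output would then be passed to the downstream medical-image classifier, so the defense requires no retraining of the classifier itself.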