D2Defend: Dual-Domain based Defense against Adversarial Examples

Published: 2021 (IJCNN 2021), Last Modified: 29 Oct 2023
Abstract: Convolutional neural networks (CNNs) have recently been widely applied to computer vision tasks, yet they are seriously vulnerable to imperceptible adversarial perturbations. This phenomenon has attracted considerable attention to the topic of adversarial robustness. Existing adversarial defense methods mainly focus on improving the robustness of models (e.g., adversarial training) or directly removing adversarial perturbations (e.g., input-transformation based methods), while rarely considering the accurate recovery of the input's image structures, which also plays a vital role in CNN predictions. To this end, we propose a Dual-Domain based Defense (D2Defend) method that recovers low-frequency and high-frequency image structures in both the spatial and transform domains while simultaneously removing adversarial perturbations. Unlike existing input-transformation based methods, our method decomposes the input image into an edge-feature layer and a texture-feature layer, which are processed by bilateral filtering and short-time Fourier transform (STFT) filtering, respectively. Experimental results demonstrate the effectiveness of our method against various adversarial attacks and show its superiority over other adversarial defenses, especially at strong adversarial strengths.
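The dual-domain pipeline in the abstract (decompose the input into an edge layer and a texture layer, filter each in its own domain, then recombine) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, parameter values, and the use of a plain windowed 2-D FFT low-pass as a stand-in for the paper's STFT filtering are all our assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving spatial-domain smoothing: each output pixel is a
    Gaussian-weighted average, with weights damped by intensity difference
    so that sharp edges are kept."""
    h, w = img.shape
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def fft_lowpass(img, keep=0.25):
    """Transform-domain filtering: zero out high-frequency coefficients.
    (A plain 2-D FFT low-pass used here as a simplified stand-in for the
    paper's STFT filtering.)"""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep / 2), int(w * keep / 2)
    mask = np.zeros((h, w))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def dual_domain_defense_sketch(img):
    """Split the input into an edge (low-frequency structure) layer and a
    texture (high-frequency residual) layer, denoise each in its own
    domain, and recombine the cleaned layers."""
    edge = bilateral_filter(img)      # spatial-domain, edge-preserving
    texture = img - edge              # high-frequency residual layer
    texture = fft_lowpass(texture)    # transform-domain perturbation removal
    return np.clip(edge + texture, 0.0, 1.0)
```

Note that the decomposition is exact before filtering (`edge + texture == img`), so any perturbation energy suppressed by the two per-layer filters is what the recombined image loses, while the bilateral step preserves the edge structures that the abstract argues are vital for CNN predictions.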