One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: *Statistical adversarial data detection* (SADD) detects whether an upcoming batch contains *adversarial examples* (AEs) by measuring the distributional discrepancies between *clean examples* (CEs) and AEs. In this paper, we explore the strength of SADD-based methods by theoretically showing that minimizing distributional discrepancy can help reduce the expected loss on AEs. Despite this strength, SADD-based methods have a potential limitation: they discard inputs that are detected as AEs, leading to the loss of clean information within those inputs. To address this limitation, we propose a two-pronged adversarial defense method, named ***D***istributional-discrepancy-based ***A***dversarial ***D***efense (DAD). In the training phase, DAD first optimizes the test power of the *maximum mean discrepancy* (MMD) to derive MMD-OPT, which is *a stone that kills two birds*. MMD-OPT first serves as a *guiding signal* to minimize the distributional discrepancy between CEs and AEs to train a denoiser. Then, it serves as a *discriminator* to differentiate CEs and AEs during inference. Overall, in the inference stage, DAD consists of a two-pronged process: (1) directly feeding the detected CEs into the classifier, and (2) removing noise from the detected AEs with the distributional-discrepancy-based denoiser. Extensive experiments show that DAD outperforms current *state-of-the-art* (SOTA) defense methods by *simultaneously* improving clean and robust accuracy on CIFAR-10 and ImageNet-1K against adaptive white-box attacks. Code is publicly available at: https://github.com/tmlr-group/DAD.
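The two-pronged inference stage lends itself to a short sketch. Below is a minimal PyTorch illustration, assuming a trained MMD-OPT statistic `mmd_opt`, a denoiser, a classifier, a held-out clean reference batch, and a detection threshold `tau`; these names are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of DAD's two-pronged inference (illustrative names, not the
# authors' actual API): `mmd_opt` is a trained MMD statistic with optimized
# test power, `clean_ref` is a held-out batch of clean examples, and `tau`
# is a detection threshold calibrated on clean data.
import torch

@torch.no_grad()
def dad_inference(batch, clean_ref, mmd_opt, denoiser, classifier, tau):
    # Discriminator role: estimate the distributional discrepancy between
    # the incoming batch and the clean reference batch.
    discrepancy = mmd_opt(batch, clean_ref)

    if discrepancy <= tau:
        # Prong 1: the batch is detected as clean, so feed it to the
        # classifier directly, leaving clean accuracy nearly untouched.
        return classifier(batch)
    # Prong 2: the batch is detected as adversarial, so denoise it
    # before classification instead of discarding it.
    return classifier(denoiser(batch))
```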
Lay Summary: *Statistical adversarial data detection* (SADD) is a powerful approach that leverages distributional discrepancy to defend against *adversarial examples* (AEs). However, SADD-based methods discard entire batches of samples if they are detected as AEs, leading to the loss of clean information within those samples. We aim to design an adversarial defense method that leverages the effectiveness of SADD-based methods while preserving all the data before feeding them into a classifier. In this paper, we first prove that minimizing distributional discrepancy helps reduce the expected loss on AEs, which motivates the design of our method. We propose ***D***istributional-discrepancy-based ***A***dversarial ***D***efense (DAD). In the training phase, DAD first optimizes the test power of the *maximum mean discrepancy* (MMD) to derive MMD-OPT, which is *a stone that kills two birds*. MMD-OPT first serves as a *guiding signal* to minimize the distributional discrepancy between *clean examples* (CEs) and AEs to train a denoiser. Then, it serves as a *discriminator* to differentiate CEs and AEs during inference. Overall, in the inference stage, DAD consists of a two-pronged process: (1) directly feeding the detected CEs into the classifier, and (2) removing noise from the detected AEs with the distributional-discrepancy-based denoiser. DAD combines the strengths of SADD-based and denoiser-based methods while addressing their limitations: DAD separates CEs and AEs in the inference phase, keeping accuracy on CEs nearly unaffected (i.e., the model's utility is preserved), while AEs are properly handled by the denoiser (i.e., the model's robustness is improved). Furthermore, DAD is highly efficient and generalizes well to unseen attacks, so it can be deployed in real-world systems to defend against adversarial attacks.
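The training phase described above can likewise be sketched in a few lines. The snippet below, under the same illustrative assumptions as before, shows MMD-OPT in its guiding-signal role: a differentiable MMD statistic `mmd_opt` is used directly as the loss that pulls denoised AEs toward the clean distribution. The data-loader format and hyperparameters are assumptions for exposition, not the paper's exact training recipe.

```python
# Hedged sketch of the denoiser training loop: `mmd_opt` is assumed to be a
# differentiable MMD statistic whose test power has already been optimized,
# and `loader` is assumed to yield paired batches of CEs and matching AEs.
import torch

def train_denoiser(denoiser, mmd_opt, loader, epochs=10, lr=1e-4):
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(epochs):
        for clean_batch, adv_batch in loader:
            # Guiding-signal role: the MMD between denoised AEs and CEs is
            # the training loss, so minimizing it directly shrinks the
            # distributional discrepancy the detector measures.
            loss = mmd_opt(denoiser(adv_batch), clean_batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return denoiser
```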
Link To Code: https://github.com/tmlr-group/DAD
Primary Area: Deep Learning->Robustness
Keywords: adversarial defense, adversarial robustness, accuracy-robustness trade-off
Submission Number: 14032