Vulnerability-Aware Instance Reweighting For Adversarial Training

Published: 20 Jul 2023, Last Modified: 20 Jul 2023. Accepted by TMLR.
Abstract: Adversarial Training (AT) has been found to substantially improve the robustness of deep learning classifiers against adversarial attacks. AT achieves robustness by including adversarial examples in the training of a classifier. Most variants of AT algorithms treat every training example equally. However, recent works have shown that better performance is achievable by treating them unequally. In addition, it has been observed that AT exerts an uneven influence on different classes in a training set and unfairly hurts examples corresponding to classes that are inherently harder to classify. Consequently, various reweighting schemes have been proposed that assign unequal weights to the robust losses of individual examples in a training set. In this work, we propose a novel instance-wise reweighting scheme. It considers the vulnerability of each natural example and the resulting information loss on its adversarial counterpart occasioned by adversarial attacks. Through extensive experiments, we show that our proposed method significantly improves over existing reweighting schemes, especially against strong white-box and black-box attacks.
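The abstract does not spell out the weighting function itself. As a rough illustration only, the sketch below shows a generic instance-reweighted adversarial training step in PyTorch; the vulnerability heuristic used here (the drop in true-class confidence between a natural example and its PGD adversarial counterpart) and the softmax-normalized weights are assumptions standing in for the paper's actual scheme, not a reproduction of it.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial counterparts of x with standard PGD (illustrative settings)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def reweighted_at_step(model, optimizer, x, y, temperature=1.0):
    """One instance-reweighted adversarial training step.

    NOTE: the vulnerability measure below is a placeholder heuristic,
    not the weighting function proposed in the paper.
    """
    x_adv = pgd_attack(model, x, y)

    with torch.no_grad():
        # True-class confidence on natural and adversarial inputs.
        p_nat = F.softmax(model(x), dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
        p_adv = F.softmax(model(x_adv), dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
        # Larger confidence drop => more vulnerable example => larger weight.
        vulnerability = (p_nat - p_adv).clamp(min=0)
        weights = F.softmax(vulnerability / temperature, dim=0) * len(y)

    # Weight the per-example robust losses before averaging.
    per_example_loss = F.cross_entropy(model(x_adv), y, reduction="none")
    loss = (weights * per_example_loss).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```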
Submission Length: Regular submission (no more than 12 pages of main content)
Supplementary Material: pdf
Assigned Action Editor: ~Qibin_Zhao1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1195