PIAT: Parameter Interpolation based Adversarial Training for Image Classification

Published: 20 Jun 2023, Last Modified: 07 Aug 2023, AdvML-Frontiers 2023
Keywords: Adversarial Training, Model Robustness, Parameter Interpolation, Normalized Mean Square Error
TL;DR: We propose Parameter Interpolation based Adversarial Training (PIAT), a framework that makes full use of historical information during training to address the oscillation and overfitting issues of adversarial training.
Abstract: Adversarial training has been demonstrated to be the most effective approach for defending against adversarial attacks. However, existing adversarial training methods exhibit apparent oscillation and overfitting issues during training, which degrade their defense efficacy. In this work, we propose a novel framework, termed Parameter Interpolation based Adversarial Training (PIAT), that makes full use of historical information during training. Specifically, at the end of each epoch, PIAT sets the model parameters to an interpolation of the parameters from the previous and current epochs. In addition, we suggest using the Normalized Mean Square Error (NMSE) to further improve robustness by aligning the relative, rather than absolute, magnitudes of the logits of clean and adversarial examples. Extensive experiments on several benchmark datasets and various networks show that our framework markedly improves model robustness and reduces the generalization error.
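To make the two ideas in the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: the interpolation coefficient `alpha`, the L2 normalization of the logits inside the NMSE term, and all function names are assumptions for illustration only.

```python
# Hypothetical sketch of the two components described in the abstract (assumptions noted inline).
import torch
import torch.nn.functional as F

def interpolate_parameters(model, prev_state, alpha=0.5):
    """At the end of an epoch, blend the current parameters with the previous
    epoch's parameters: theta <- alpha * theta_prev + (1 - alpha) * theta_curr.
    `alpha` is an assumed interpolation coefficient, not a value from the paper."""
    new_state = {}
    for name, param in model.state_dict().items():
        new_state[name] = alpha * prev_state[name] + (1.0 - alpha) * param
    model.load_state_dict(new_state)

def nmse_loss(clean_logits, adv_logits, eps=1e-8):
    """Normalized Mean Square Error between clean and adversarial logits.
    Each logit vector is rescaled (here by its L2 norm -- an assumed choice of
    normalization) so that only relative magnitudes are aligned, not absolute ones."""
    clean_n = clean_logits / (clean_logits.norm(dim=1, keepdim=True) + eps)
    adv_n = adv_logits / (adv_logits.norm(dim=1, keepdim=True) + eps)
    return F.mse_loss(adv_n, clean_n)
```

In a training loop under these assumptions, one would snapshot `model.state_dict()` at the start of each epoch, add `nmse_loss` to the adversarial training objective for each batch, and call `interpolate_parameters` once the epoch finishes.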
Supplementary Material: zip
Submission Number: 52