NoiLin: Improving adversarial training and correcting stereotype of noisy labels

Published: 30 Jun 2022, Last Modified: 17 Sept 2024. Accepted by TMLR.
Abstract: Adversarial training (AT), formulated as a minimax optimization problem, can effectively enhance a model's robustness against adversarial attacks. Existing AT methods mainly focus on manipulating the inner maximization to generate quality adversarial variants, or on manipulating the outer minimization to design effective learning objectives. However, empirical results of AT always exhibit robustness at odds with accuracy and the existence of the cross-over mixture problem, which motivates us to study whether label randomness can benefit AT. First, we thoroughly investigate the injection of noisy labels (NLs) into AT's inner maximization and outer minimization, respectively, and obtain some observations on when NL injection benefits AT. Second, based on these observations, we propose a simple but effective method---NoiLIn---that randomly injects NLs into the training data at each training epoch and dynamically increases the NL injection rate once robust overfitting occurs. Empirically, NoiLIn can significantly mitigate AT's undesirable issue of robust overfitting and even further improve the generalization of state-of-the-art AT methods. Philosophically, NoiLIn sheds light on a new perspective of learning with NLs: NLs should not always be deemed detrimental, and even in the absence of NLs in the training set, we may consider injecting them deliberately.
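For readers who want a concrete picture of the per-epoch injection schedule described in the abstract, a minimal Python sketch is given below. The symmetric-flipping noise model, the function names, and the validation-based robust-overfitting trigger are illustrative assumptions rather than the paper's exact implementation; see the code link below for the authors' version.

    # Minimal sketch of the NL-injection idea from the abstract (not the
    # authors' exact implementation). The symmetric-flipping noise model and
    # the validation-based trigger are assumptions for illustration.
    import numpy as np

    def inject_symmetric_noise(labels, noise_rate, num_classes, rng):
        """Relabel a `noise_rate` fraction of examples with a uniformly
        chosen *different* class (symmetric label flipping)."""
        labels = np.asarray(labels).copy()
        n = len(labels)
        flip_idx = rng.choice(n, size=int(noise_rate * n), replace=False)
        for i in flip_idx:
            # Shift by 1..(num_classes - 1) so the new label always differs.
            offset = rng.integers(1, num_classes)
            labels[i] = (labels[i] + offset) % num_classes
        return labels

    def update_noise_rate(noise_rate, robust_val_acc, best_robust_val_acc,
                          step=0.05, max_rate=0.4):
        """Increase the injection rate when robust validation accuracy falls
        below its best value so far (a simple robust-overfitting signal)."""
        if robust_val_acc < best_robust_val_acc:
            noise_rate = min(noise_rate + step, max_rate)
        return noise_rate

    # Per-epoch usage inside an AT loop (schematic):
    #   rng = np.random.default_rng(0)
    #   noisy_labels = inject_symmetric_noise(clean_labels, noise_rate, 10, rng)
    #   ... one epoch of adversarial training on (inputs, noisy_labels) ...
    #   noise_rate = update_noise_rate(noise_rate, robust_val_acc,
    #                                  best_robust_val_acc)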
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
--- First changes ---
1. Correct typos that were highlighted by reviewers.
2. Add a discussion about "Relation with Label Manipulations Benefiting Robustness".
3. Add more experiments on "NoiLIn with extra unlabeled training data".
4. Conduct more experiments and add confidence intervals for Figure 4 and Table 1.
5. Add more experiments on "NoiLIn with Batches that Consist of both Natural and Adversarial Examples".
--- Second changes ---
1. Modify the abstract and introduction, i.e., remove the statement about "overlaps in noiseless datasets".
2. Weaken the arguments of Figure 3(b).
3. Integrate the confidence intervals into Table 1.
--- Camera-ready changes ---
1. Add the code link.
Code: https://github.com/zjfheart/NoiLIn
Assigned Action Editor: ~Pin-Yu_Chen1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 25