Revisiting Semi-supervised Adversarial Training via Noise-aware Online Robust Distillation

Published: 06 Mar 2025, Last Modified: 22 Apr 2025, ICLR 2025 Workshop on Data Problems (Poster), CC BY 4.0
Keywords: Adversarial Robustness, Semi-supervised Learning
TL;DR: High-quality pseudo labels matter more than strong pretrained models in semi-supervised adversarial training; we present SNORD, a simple framework that achieves SOTA robustness and performance in extremely low-labeling regimes.
Abstract: Training adversarially robust models under a low-labeling regime is crucial for real-world deployment. Robust self-training (RST), which first performs standard training to generate pseudo labels and then applies adversarial training on them, has emerged as a key paradigm in this setting. Recent advances in RST primarily focus on leveraging strong pre-trained models to improve robustness and performance. However, we find that these methods often overlook the critical role of pseudo labels in the training pipeline, leading to degraded results in extremely low-labeling regimes (< 5\%). In this work, we introduce SNORD, a simple yet effective approach that significantly improves robustness by enhancing pseudo-label quality in the first stage and managing label noise in the second stage through advanced techniques from standard semi-supervised learning. Experiments on CIFAR-10, CIFAR-100, and TinyImageNet-200 demonstrate that SNORD outperforms prior methods by up to 22\% in robust accuracy under low-labeling conditions. Furthermore, compared to fully supervised adversarial training, SNORD achieves 90\% relative robust accuracy under AutoAttack with an $\ell_{\infty}$ perturbation budget of $8/255$, while requiring only 0.1\%, 2\%, and 10\% of the labels on the three benchmarks, respectively. Additional analyses validate the contribution of each component and show that SNORD can be seamlessly integrated with existing adversarial pretraining strategies to further enhance robustness.
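For orientation, the two-stage RST pipeline the abstract describes (standard training to produce pseudo labels, then adversarial training on labeled plus pseudo-labeled data) can be summarized in a short sketch. The snippet below is a minimal illustration of the generic RST baseline under assumed hyperparameters, not the SNORD method itself; the pgd_attack and robust_self_training helpers, the data loaders, and all default values are hypothetical and for exposition only.

import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard l_inf PGD: ascend the loss, projecting back into the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def robust_self_training(model, labeled_loader, unlabeled_loader, optimizer, epochs=10):
    """Generic RST baseline (hypothetical sketch, not SNORD)."""
    # Stage 1: assume `model` was already fit on the small labeled set with
    # standard (non-adversarial) training; use its predictions as pseudo labels.
    model.eval()
    pseudo_batches = []
    with torch.no_grad():
        for x, _ in unlabeled_loader:
            pseudo_batches.append((x, model(x).argmax(dim=1)))
    # Stage 2: PGD adversarial training on labeled + pseudo-labeled batches.
    for _ in range(epochs):
        for x, y in list(labeled_loader) + pseudo_batches:
            model.eval()  # freeze BN statistics while crafting the attack
            x_adv = pgd_attack(model, x, y)
            model.train()
            loss = F.cross_entropy(model(x_adv), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

Per the abstract, SNORD's contributions would slot into this skeleton at two points: producing higher-quality pseudo labels in stage 1, and handling the noise in those labels during the adversarial training of stage 2.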
Submission Number: 32