Abstract: Recent studies show that adversarial attacks threaten the security of deep neural networks (DNNs). To tackle this issue, ensemble learning methods have been proposed that train multiple sub-models to improve adversarial resistance without compromising accuracy. However, these methods often incur high computational costs, including multi-step optimization to generate high-quality augmentation data and additional network passes to optimize complicated regularizers. In this paper, we present the FAST ENsemble learning method (FASTEN), which significantly reduces training costs in terms of both data generation and optimization. First, FASTEN employs a single-step technique to initialize coarse augmentation data and recycles optimization knowledge to enhance its quality, considerably reducing the data generation budget. Second, FASTEN introduces a low-cost regularizer that increases intra-model similarity and inter-model diversity, with most regularization components computed without network passes, further decreasing training costs. Empirical results on various datasets and networks demonstrate that FASTEN achieves higher robustness while requiring significantly fewer resources than current methods. For example, a 5-member FASTEN speeds up optimization by $7\times$ and $28\times$ compared to the state-of-the-art DVERGE and TRS, respectively. Moreover, FASTEN outperforms the stronger of the two methods by 26.3% and 6.1% under black-box and white-box attacks, respectively. FASTEN is also compatible with existing fast adversarial training techniques, making it an attractive choice for enhancing robustness without incurring excessive costs. The source code is publicly available at https://github.com/mesunhlf/FASTEN.
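To make the data-generation idea concrete, the sketch below illustrates one plausible form of single-step augmentation with recycled optimization knowledge: an FGSM-style perturbation warm-started from the perturbation stored in a previous pass, in the spirit of fast adversarial training. This is a minimal illustration under our own assumptions, not the paper's algorithm; the names `make_augmented`, `delta_buffer`, and the step sizes are hypothetical.

```python
# Minimal sketch (assumed, not from the paper): single-step augmentation data
# generation that recycles the perturbation found for each sample in an earlier
# pass as the initialization for the current one.
import torch
import torch.nn.functional as F

def make_augmented(model, x, y, idx, delta_buffer, eps=8/255, alpha=10/255):
    """One-step (FGSM-style) update, warm-started from a stored perturbation."""
    delta = delta_buffer[idx].clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    # Single gradient-sign step, then projection onto the epsilon ball.
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    delta_buffer[idx] = delta  # recycle this perturbation for the next epoch
    return (x + delta).clamp(0, 1)
```

Reusing the stored perturbation avoids restarting the search from scratch each epoch, which is one way a single gradient step can still yield useful augmentation data at a fraction of the multi-step cost.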