Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness

Published: 02 Dec 2021, Last Modified: 05 May 2023 · AAAI-22 AdvML Workshop Long Paper
Keywords: Adversarial robustness, ensemble learning, random gated network, parameter sharing
TL;DR: Ensemble-in-One augments a network into a multi-path random gated network, providing high scalability for ensembling. The adversarial robustness of a CNN can be significantly improved without much sacrifice in clean accuracy.
Abstract: Adversarial attacks threaten modern deep learning systems by crafting adversarial examples with small perturbations that fool convolutional neural networks (CNNs). Ensemble training methods promise better adversarial robustness by diversifying the vulnerabilities among the sub-models, while maintaining accuracy comparable to standard training. Previous work also demonstrates that enlarging the ensemble can improve robustness. However, existing ensemble methods scale poorly, because complexity grows rapidly as more sub-models are added to the ensemble. Moreover, training or deploying an ensemble with many sub-models is often infeasible under tight hardware resource budgets and latency requirements. In this work, we propose Ensemble-in-One (EIO), a simple but effective method to enlarge the ensemble within a random gated network (RGN). EIO augments the original model by replacing its parameterized layers with multi-path random gated blocks (RGBs) to construct an RGN. By diversifying the vulnerabilities of the numerous paths through the super-net, EIO provides high scalability, because the number of paths within an RGN grows exponentially with network depth. Our experiments demonstrate that EIO consistently outperforms previous ensemble training methods with even less computational overhead, while achieving better accuracy-robustness trade-offs than adversarial training.
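The core idea can be illustrated with a minimal pure-Python sketch. The names (`RandomGatedBlock`, `rgn_forward`) and the toy callables are illustrative assumptions, not the paper's implementation; a real RGN would replace convolutional layers with gated blocks of duplicated, independently trained layer copies. The sketch only shows the gating mechanism and why the number of end-to-end paths grows exponentially with depth:

```python
import random

class RandomGatedBlock:
    """One block holding k candidate paths; each forward pass
    samples a single path uniformly at random (the random gate)."""
    def __init__(self, paths):
        self.paths = paths  # list of callables standing in for layer copies

    def forward(self, x):
        gate = random.randrange(len(self.paths))  # sample the gate
        return self.paths[gate](x)

# Toy RGN: depth blocks, each with k candidate paths.
k, depth = 3, 4
blocks = [RandomGatedBlock([lambda x, i=i: x + i for i in range(k)])
          for _ in range(depth)]

def rgn_forward(x):
    # One stochastic end-to-end path is sampled per forward pass.
    for block in blocks:
        x = block.forward(x)
    return x

# The implicit ensemble size: k ** depth distinct end-to-end paths.
num_paths = k ** depth  # 3 ** 4 = 81
```

Each training step thus samples one of `num_paths` sub-networks, so the effective ensemble can be enlarged without storing separate full models.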