RED: Efficiently Boosting Ensemble Robustness via Random Sampling Inference

15 Sept 2024 (modified: 20 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Adversarial Robustness, Ensemble Defences, Randomness, Hypernetworks, Computer Vision
Abstract: Despite the remarkable achievements of Deep Neural Networks (DNNs) across diverse tasks, these high-performing models remain susceptible to adversarial attacks. Considerable research has focused on strengthening the robustness of individual models and then combining them with a simple ensemble defense. However, existing ensemble techniques tend to increase inference latency and the number of parameters while achieving suboptimal robustness, which motivates us to reconsider the model-ensemble framework. To address suboptimal robustness and inference latency, we introduce a novel ensemble defense called Random Ensemble Defense (RED). Specifically, we expedite inference via random sampling, which also makes the ensemble harder for an attacker to target. To train an effective model ensemble, it is crucial to diversify the adversarial vulnerabilities of its members, which can be achieved by reducing the adversarial transferability among them. To this end, we propose incorporating gradient-similarity and Lipschitz regularizers into the training process. Moreover, to overcome the obstacle of a large number of parameters, we develop a parameter-lean version of RED (PS-RED). Extensive experiments on popular datasets demonstrate that the proposed methods not only significantly improve ensemble robustness but also reduce inference delays and storage usage for ensemble models. For example, our models improve robust accuracy by approximately 15% (RED) and reduce parameters by approximately 90% (PS-RED) on CIFAR-10 compared with the most recent baselines.
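Since only the abstract is available here, the following PyTorch sketch is merely an illustration of the two ideas it names: per-query random sampling of one ensemble member at inference, and a gradient-similarity penalty that discourages aligned input gradients across members (a common proxy for adversarial transferability). All names (`red_predict`, `gradient_similarity_penalty`) and implementation details are assumptions for illustration, not the authors' method or released code.

```python
# Hypothetical sketch of RED-style inference and a gradient-similarity
# regularizer, assuming `models` is a list of trained torch.nn.Modules.
# This is an assumption-based illustration, not the paper's implementation.
import random
import torch
import torch.nn.functional as F

def red_predict(models, x):
    """Answer each query with one randomly sampled ensemble member.

    Inference costs a single forward pass, and the attacker cannot
    know in advance which member will respond to a given query.
    """
    model = random.choice(models)
    model.eval()
    with torch.no_grad():
        return model(x)

def gradient_similarity_penalty(models, x, y):
    """Average pairwise cosine similarity of the members' input
    gradients; adding this term to the training loss pushes the
    members toward diverse adversarial vulnerabilities.
    """
    grads = []
    for model in models:
        x_req = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_req), y)
        # create_graph=True so the penalty itself is differentiable
        g, = torch.autograd.grad(loss, x_req, create_graph=True)
        grads.append(g.flatten(1))
    penalty = x.new_zeros(())
    n_pairs = 0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(
                grads[i], grads[j], dim=1).mean()
            n_pairs += 1
    return penalty / max(n_pairs, 1)
```

Under these assumptions, the penalty would be added (with a weight) to the members' joint training loss; the abstract's Lipschitz regularizer and the hypernetwork-based parameter sharing of PS-RED are not sketched here, as the abstract gives no detail on their form.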
Supplementary Material: pdf
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 924