DeepGRE: Global Robustness Evaluation of Deep Neural Networks

Published: 01 Jan 2024, Last Modified: 29 Sept 2024 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Robustness measurement for deep neural networks (DNNs) has gained significant attention, especially in safety-critical applications. Numerous studies assess the robustness of a classifier by averaging local robustness over a fixed set of data samples, such as a test set. However, such local statistics may not accurately represent the true global robustness over the entire underlying, unknown data distribution. To address this challenge, this paper proposes a novel framework, DeepGRE, which estimates global robustness to adversarial perturbations by combining generative models with existing local robustness evaluation methods. In addition, DeepGRE employs a quasi-Monte Carlo approach to produce low-variance estimates of global robustness, making the assessments more reliable and statistically sound, since the randomness comes entirely from samples drawn from a generative model. From a theoretical perspective, this work provides an upper bound, based on Lipschitz continuity, on the gap between the true and estimated global robustness, and derives a sample-complexity guarantee on the difference between the true value and its empirical estimate. Our code is available at https://github.com/TrustAI/DeepGRE.
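For intuition, the sketch below illustrates the estimation loop the abstract describes: draw latent codes with a quasi-Monte Carlo (Sobol) sequence, map them through a generative model, evaluate a local robustness measure at each generated sample, and average. The toy generator, classifier, latent range, and the first-order `local_robustness` proxy are illustrative assumptions, not the paper's actual components; any trained generator and local evaluator could be substituted.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import qmc

# Toy stand-ins (assumptions): a simple generator G(z) and classifier f(x).
# In DeepGRE these would be a trained generative model and the DNN under test.
latent_dim, data_dim, n_classes = 8, 32, 10
generator = nn.Sequential(nn.Linear(latent_dim, data_dim), nn.Tanh())
classifier = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

def local_robustness(x: torch.Tensor) -> float:
    """Cheap local robustness proxy: classification margin divided by the
    gradient norm of the margin (a first-order estimate of the distance to
    the decision boundary). Any local robustness evaluator could be plugged in."""
    x = x.clone().requires_grad_(True)
    logits = classifier(x)
    top2 = logits.topk(2, dim=-1).values
    margin = (top2[..., 0] - top2[..., 1]).sum()
    (grad,) = torch.autograd.grad(margin, x)
    return (margin / (grad.norm() + 1e-12)).item()

def estimate_global_robustness(n_samples: int = 1024) -> float:
    # Quasi-Monte Carlo: a scrambled Sobol sequence covers the latent space
    # more evenly than i.i.d. draws, reducing the variance of the average.
    sampler = qmc.Sobol(d=latent_dim, scramble=True)
    u = sampler.random(n_samples)                      # points in [0, 1)^d
    z = torch.tensor(qmc.scale(u, -3.0, 3.0), dtype=torch.float32)

    with torch.no_grad():
        samples = generator(z)                         # draw data via G(z)
    # Global robustness estimate = mean local robustness over generated samples.
    return float(np.mean([local_robustness(x) for x in samples]))

print(f"Estimated global robustness: {estimate_global_robustness(256):.4f}")
```

The key design point is that the expectation is taken over the generative model rather than a fixed test set, so the estimate targets the underlying data distribution, and the low-discrepancy latent samples are what give the reduced variance claimed in the abstract.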