Abstract: The vulnerability of deep neural networks (DNNs) to input perturbations poses a significant challenge. Recent work on robustness verification of DNNs not only lacks scalability but also imposes severe restrictions on the network architecture (layers, activation functions, etc.). To address these limitations, we propose SORA, a novel framework for scalable black-box reachability analysis of DNNs. SORA handles a broad class of network structures, including very deep networks with large numbers of neurons and nonlinear activation functions. Based on Lipschitz continuity, SORA verifies reachability properties of DNNs with a novel optimisation algorithm that carries a global convergence guarantee. Our method does not require access to the internal structure of the DNN, making it a black-box method. Experimental results show that, compared to existing verification methods, SORA achieves superior efficiency and scalability, especially on very deep networks with many neurons and various types of nonlinear activation functions.