Abstract: The robustness of deep neural networks has received extensive attention, and formal verification is widely regarded as necessary to guarantee it. For ReLU neural networks, abundant verification studies and techniques exist. However, verifying networks with sigmoid-like activations still relies on linear approximation, which inevitably introduces errors and yields imprecise results. To reduce this error and obtain tighter results, we present a branch-and-bound framework for verifying sigmoid-like neural networks. Within this framework, we design a neuron splitting method and a branching strategy: the splitting method handles neurons with non-linear sigmoid-like activation functions, while the branching strategy reduces the size of the branch-and-bound tree and thereby improves verification performance. We implement our framework as SigBaB and evaluate it on open-source benchmarks. Experimental results show that our method produces more precise verification results than other state-of-the-art methods and that our branching strategy outperforms alternative strategies.
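To make the core intuition concrete, here is a minimal Python sketch, not the paper's actual algorithm: the midpoint split point, the function names, and the chord-gap error measure are illustrative assumptions. It shows how splitting a sigmoid neuron's input interval tightens a linear relaxation, which is the effect the branch-and-bound framework exploits.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chord_gap(l, u, n=1000):
    """Max vertical distance between sigmoid and its chord on [l, u] --
    a simple proxy for the error of a single linear relaxation."""
    xs = np.linspace(l, u, n)
    chord = sigmoid(l) + (sigmoid(u) - sigmoid(l)) / (u - l) * (xs - l)
    return np.max(np.abs(sigmoid(xs) - chord))

def split_neuron(l, u):
    """Split a sigmoid neuron's input interval at the midpoint (one
    possible splitting rule), yielding two tighter sub-problems."""
    m = 0.5 * (l + u)
    return (l, m), (m, u)

# Usage: splitting shrinks the relaxation error on each branch.
l, u = -4.0, 4.0
print(f"gap before split: {chord_gap(l, u):.4f}")
for a, b in split_neuron(l, u):
    print(f"gap on [{a:+.1f}, {b:+.1f}]: {chord_gap(a, b):.4f}")
```

Each split spawns two sub-problems in the branch-and-bound tree, so a branching strategy that picks the most error-reducing neurons keeps the tree small while still tightening the bounds.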