Keywords: neural net verification, semidefinite optimization, interior-point method
Abstract: Semidefinite programming (SDP) relaxation has emerged as a promising approach for neural network verification, offering tighter bounds than other convex relaxation methods for deep neural networks (DNNs) with ReLU activations. However, we identify a critical limitation in the SDP relaxation when applied to deep networks: a phenomenon we term interior-point vanishing, which leads to the loss of strict feasibility -- a crucial condition for the numerical stability and optimality of SDP.
Through rigorous theoretical and empirical analysis, we demonstrate that interior-point vanishing creates a fundamental barrier to scaling SDP-based verification methods. Specifically, strict feasibility diminishes as the depth of DNNs increases. To address this issue, we design and investigate five solutions that enhance the feasibility conditions of the verification problem. Our methods successfully solve 88\% of the problems that existing methods cannot, which account for 41\% of the total. Our analysis also reveals that the valid constraints on the lower and upper bounds of each ReLU unit have traditionally been inherited from prior work without rigorous justification. We find that these constraints are not merely unhelpful but in fact detrimental to the problem's feasibility.
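To make the notions of lifting and strict feasibility concrete, the following is a minimal illustrative sketch (not taken from the paper) of the standard SDP relaxation for a single ReLU unit, in the style commonly used for ReLU verification: the exact constraint y = max(x, 0) is replaced by linear constraints on entries of a lifted positive semidefinite moment matrix. The specific values and constraint names here are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact formulation):
# one ReLU unit y = max(x, 0) with pre-activation bound x in [l, u].
# The standard lifting uses a moment matrix
#   P = [[1,   x,   y ],
#        [x,  Xxx, Xxy],
#        [y,  Xxy, Yyy]]  required to be PSD,
# and relaxes the exact complementarity y * (y - x) = 0 to the linear
# constraint Yyy - Xxy = 0 on the lifted entries.

l, u = -1.0, 1.0
x = 0.5
y = max(x, 0.0)          # exact ReLU output

v = np.array([1.0, x, y])
P = np.outer(v, v)       # rank-1 lifting of the exact point

# Relaxed ReLU complementarity holds exactly on the rank-1 point.
Yyy, Xxy = P[2, 2], P[1, 2]
assert abs(Yyy - Xxy) < 1e-12

# Interval (RLT-style) valid constraint on the squared term:
# Xxx <= (l + u) * x - l * u, which holds since (x - l)(u - x) >= 0.
Xxx = P[1, 1]
assert Xxx <= (l + u) * x - l * u + 1e-12

# The rank-1 matrix is PSD but singular: its smallest eigenvalue is 0,
# so it sits on the boundary of the PSD cone. It is feasible but not
# strictly feasible. "Interior-point vanishing" in the abstract refers
# to the regime where, as network depth grows, every feasible point is
# forced onto this boundary.
eigvals = np.linalg.eigvalsh(P)
print(abs(eigvals[0]) < 1e-9)  # smallest eigenvalue is (numerically) zero
```

The sketch shows why a rank-1 (exact) lifting can never supply a strictly feasible interior point on its own; strict feasibility requires a feasible matrix of full rank, which is exactly what the abstract reports becomes unattainable at depth.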
Primary Area: optimization
Submission Number: 3888