SNOV: A Scalable Near-global Optimal Verifier for Neural Networks under Large Perturbations

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: neural network verification, global optimal, nonlinear programming, branch and bound, bound propagation, parallel computing, scalability, upper bounds, lower bounds, power flow, power grids
TL;DR: SNOV integrates nonlinear programming with convex relaxations in branch-and-bound, enabling scalable near-global optimal verification that unites efficiency and rigor for trustworthy AI in safety-critical domains.
Abstract: Neural networks achieve remarkable performance across domains, yet their deployment in safety-critical settings is limited by robustness concerns. Formal verification offers guarantees but faces a trade-off: complete verifiers scale poorly, while incomplete verifiers either yield loose lower bounds or miss counterexamples due to local optima. We propose a hybrid verifier within a branch-and-bound (BaB) framework that tightens bounds from both sides: an NLP-based upper bound (via complementarity constraints) rapidly rejects unsafe instances, while a relaxation-based lower bound (e.g., $\beta$-CROWN) certifies safe ones. When early stopping is not triggered, the procedure converges to an $\epsilon$-tight interval $[\underline{\ell},\bar{u}]$ localizing the true optimum $f^\star$. To improve efficiency, we introduce warm-started NLP solves with low-rank KKT updates and a pattern-aligned strong branching strategy that accelerates lower-bound tightening. Experiments on MNIST and CIFAR-10 show that our method (i) produces substantially tighter upper bounds than PGD across perturbation radii, (ii) achieves polynomial-time per-node solves, and (iii) delivers large end-to-end speedups over MIP-based verification, further amplified by warm-starting, GPU batching, and pattern-aligned branching.
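The two-sided BaB scheme in the abstract can be sketched in toy form. Everything below is illustrative and not the authors' implementation: interval bound propagation stands in for the relaxation-based lower bound ($\beta$-CROWN in the paper), evaluating the network at the box center stands in for the warm-started NLP upper-bound solve, and the one-hidden-layer ReLU network, function names, and branching rule are hypothetical. The loop still exhibits the key property: the returned pair $(\underline{\ell},\bar{u})$ brackets the true minimum with gap at most $\epsilon$.

```python
import heapq
import numpy as np

def ibp_lower(W1, b1, w2, b2, lo, hi):
    """Cheap convex-relaxation lower bound on min f(x) over the box [lo, hi]
    (stand-in for the relaxation-based bound, e.g. beta-CROWN, in the paper)."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    pre_c, pre_r = W1 @ c + b1, np.abs(W1) @ r      # pre-activation interval
    z_lo = np.maximum(pre_c - pre_r, 0.0)           # ReLU output bounds
    z_hi = np.maximum(pre_c + pre_r, 0.0)
    # Each output weight picks whichever ReLU bound minimizes its contribution.
    return float(np.where(w2 >= 0, w2 * z_lo, w2 * z_hi).sum() + b2)

def upper_at_center(W1, b1, w2, b2, lo, hi):
    """Feasible-point upper bound at the box center (stand-in for the NLP solve)."""
    x = (lo + hi) / 2.0
    return float(w2 @ np.maximum(W1 @ x + b1, 0.0) + b2)

def bab_min(W1, b1, w2, b2, lo, hi, eps=1e-3, max_nodes=100_000):
    """Shrink [lower, upper] around the true minimum f* until the gap is <= eps,
    mirroring the epsilon-tight interval described in the abstract."""
    best_u = upper_at_center(W1, b1, w2, b2, lo, hi)
    heap, tick = [(ibp_lower(W1, b1, w2, b2, lo, hi), 0, lo, hi)], 1
    while heap:
        l, _, a, b = heapq.heappop(heap)            # region with smallest lower bound
        if best_u - l <= eps or tick >= max_nodes:
            return l, best_u                        # f* lies in [l, best_u]
        d = int(np.argmax(b - a))                   # branch on the widest input dim
        mid = 0.5 * (a[d] + b[d])
        b_left, a_right = b.copy(), a.copy()
        b_left[d], a_right[d] = mid, mid
        for ca, cb in ((a, b_left), (a_right, b)):
            best_u = min(best_u, upper_at_center(W1, b1, w2, b2, ca, cb))
            cl = ibp_lower(W1, b1, w2, b2, ca, cb)
            if cl < best_u:                         # prune regions that cannot improve
                heapq.heappush(heap, (cl, tick, ca, cb))
                tick += 1
    return best_u, best_u                           # every region pruned: bound is exact
```

Best-first popping by lower bound makes the popped value a valid global lower bound, while the incumbent `best_u` is always attained by a feasible point; the paper's contributions (complementarity-constrained NLP solves, low-rank KKT warm starts, pattern-aligned branching) replace the naive bound oracles and branching heuristic used here.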
Supplementary Material: zip
Primary Area: optimization
Submission Number: 18852