SDP-CROWN: Efficient Bound Propagation for Neural Network Verification with Tightness of Semidefinite Programming

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
Abstract: Neural network verifiers based on linear bound propagation scale impressively to massive models but can be surprisingly loose when neuron coupling is crucial. Conversely, semidefinite programming (SDP) verifiers capture inter-neuron coupling naturally, but their cubic complexity restricts them to only small models. In this paper, we propose SDP-CROWN, a novel hybrid verification framework that combines the tightness of SDP relaxations with the scalability of bound-propagation verifiers. At the core of SDP-CROWN is a new linear bound---derived via SDP principles---that explicitly captures $\ell_{2}$-norm-based inter-neuron coupling while adding only one extra parameter per layer. This bound can be integrated seamlessly into any linear bound-propagation pipeline, preserving the inherent scalability of such methods yet significantly improving tightness. In theory, we prove that our inter-neuron bound can be up to a factor of $\sqrt{n}$ tighter than traditional per-neuron bounds. In practice, when incorporated into the state-of-the-art $\alpha$-CROWN verifier, we observe markedly improved verification performance on large models with up to 65 thousand neurons and 2.47 million parameters, achieving tightness that approaches that of costly SDP-based methods.
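To give intuition for the $\sqrt{n}$ factor claimed above, the following worked example (our own illustration via a standard Cauchy--Schwarz argument, not the paper's actual inter-neuron bound) shows how an $\ell_2$-norm constraint that couples all coordinates can be $\sqrt{n}$ tighter than per-neuron interval reasoning. Suppose a perturbation $z \in \mathbb{R}^{n}$ satisfies $\|z\|_{2} \le \rho$ and we wish to bound the linear form $\sum_{i=1}^{n} z_i$:

$$
\underbrace{\sum_{i=1}^{n} z_i \;\le\; n\rho}_{\text{per-neuron bounds } |z_i| \le \rho}
\qquad\text{vs.}\qquad
\underbrace{\sum_{i=1}^{n} z_i \;=\; \mathbf{1}^{\top} z \;\le\; \|\mathbf{1}\|_{2}\,\|z\|_{2} \;\le\; \sqrt{n}\,\rho}_{\ell_2\text{ coupling (Cauchy--Schwarz)}}
$$

The per-neuron relaxation treats each coordinate independently and pays a factor of $n$, whereas reasoning jointly over the $\ell_2$ ball pays only $\sqrt{n}$; SDP-CROWN's layer-wise bound is designed to recover this kind of joint tightening inside a linear bound-propagation pass.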
Lay Summary: Verifying that neural networks behave reliably, especially when faced with small and unexpected changes to their inputs, is a key challenge in making AI systems safe. Some fast methods can check large networks but miss important internal relationships between neurons, leading to overly cautious results. Other more accurate methods, like those based on semidefinite programming (SDP), capture these relationships well but are too slow for big networks. In this work, we introduce SDP-CROWN, a new technique that combines the best of both worlds. It keeps the speed of scalable methods while adding a way to model how neurons influence each other without the heavy computation of traditional SDP. Our approach can be integrated into existing tools and is both theoretically stronger and practically more accurate. In tests on large neural networks, it significantly improves verification quality while remaining efficient.
Primary Area: Deep Learning->Robustness
Keywords: neural network verification, convex optimization, semidefinite programming
Submission Number: 11249