Bound Tightening Using Rolling-Horizon Decomposition for Neural Network Verification

Published: 01 Jan 2024 · Last Modified: 04 Oct 2024 · CPAIOR (2) 2024 · CC BY-SA 4.0
Abstract: Neural network verification aims to provide formal guarantees on the outputs of trained neural networks, ensuring their robustness against adversarial examples and enabling deployment in safety-critical applications. This paper introduces a new approach to neural network verification based on a novel mixed-integer programming (MIP) rolling-horizon decomposition. The algorithm exploits the layered structure of neural networks by applying optimization-based bound tightening (OBBT) to smaller sub-graphs of the original network in a rolling-horizon fashion, tightening the bounds in parallel. This strategy strikes a balance between obtaining tighter bounds and keeping the underlying mixed-integer programs tractable. Extensive numerical experiments on instances from the VNN-COMP benchmark library demonstrate that the proposed approach yields significantly tighter bounds than existing efficient bound-propagation methods. Notably, the proposed method proves effective in solving open verification problems. Our code is released as part of the open-source mathematical modeling tool Gravity (https://github.com/coin-or/Gravity), which is extended to support generic neural network models.
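To illustrate the rolling-horizon idea, the sketch below slides a fixed-size window of layers over a feedforward ReLU network and computes output bounds per window. This is a minimal illustration, not the paper's implementation: the per-window solve here is plain interval arithmetic, whereas the paper solves a MIP (OBBT) over each sub-graph, which is what yields the tighter bounds. All function names and the `horizon` parameter are our own for this sketch.

```python
import numpy as np

def interval_forward(W, b, lb, ub):
    """Propagate interval bounds through one affine layer followed by ReLU."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lb = Wp @ lb + Wn @ ub + b
    out_ub = Wp @ ub + Wn @ lb + b
    return np.maximum(out_lb, 0.0), np.maximum(out_ub, 0.0)

def rolling_horizon_bounds(layers, in_lb, in_ub, horizon=2):
    """Rolling-horizon bound tightening (sketch).

    For each layer i, bounds are recomputed over the sub-graph covering the
    last `horizon` layers, starting from the bounds committed at the window's
    entry. In the paper, each such window is handled by an OBBT MIP solved in
    parallel per neuron; here a cheap interval pass stands in for that solve.
    """
    bounds = [(np.asarray(in_lb, float), np.asarray(in_ub, float))]
    for i in range(len(layers)):
        start = max(0, i + 1 - horizon)       # window entry layer
        lb, ub = bounds[start]                # committed bounds at entry
        for (W, b) in layers[start:i + 1]:    # re-propagate over the window
            lb, ub = interval_forward(W, b, lb, ub)
        bounds.append((lb, ub))               # commit bounds for layer i
    return bounds
```

A larger `horizon` means each sub-problem sees more of the network (tighter bounds, harder sub-problems); `horizon=1` degenerates to layer-by-layer propagation, which is the trade-off the abstract describes.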