Many state-of-the-art neural network verifiers for ReLU networks rely on Branch and Bound (BaB)-based methods. They branch ReLU neurons into positive (active) and negative (inactive) cases and bound each subproblem independently. Since the cost of verification depends heavily on the number of subproblems, reducing the total number of branches is the key to verifying neural networks efficiently. In this paper, we consider \emph{bound implications} during branching: when one or more ReLU neurons are branched into the active (or inactive) case, they may imply that other neurons in any layer become active or inactive, or have their bounds tightened. These implications can eliminate subproblems and improve bounds. We propose a scalable method to find implications among all neurons within tens of seconds, even for large ResNets, by reusing pre-computed variables in popular bound-propagation-based verification methods such as $\alpha$-CROWN and solving a cheap linear programming problem. We then build the bound implication graph (BIG), which connects neurons with bound implications and can be used by any BaB-based verifier to reduce the number of branches needed. When evaluated on a set of popular verification benchmarks and a new benchmark consisting of harder verification problems, BIG consistently reduces verification time and verifies more problems than state-of-the-art verification tools.
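To illustrate how a BaB-based verifier could consume such a graph, the following is a minimal sketch (not the paper's implementation): it assumes neurons are identified by hypothetical `(layer, index)` pairs, stores implication edges between branching decisions, and propagates them transitively so that implied neurons need not be branched again within a subproblem.

```python
from collections import defaultdict, deque

class BoundImplicationGraph:
    """Hypothetical sketch of a bound implication graph (BIG).

    A branching decision fixes a ReLU neuron, identified by (layer, index),
    to "active" (pre-activation >= 0) or "inactive" (pre-activation <= 0).
    Edges record that one decision implies another.
    """

    def __init__(self):
        # edges[(neuron, decision)] -> list of (neuron, decision) implied by it
        self.edges = defaultdict(list)

    def add_implication(self, src_neuron, src_decision, dst_neuron, dst_decision):
        self.edges[(src_neuron, src_decision)].append((dst_neuron, dst_decision))

    def implied_decisions(self, neuron, decision):
        """Return all decisions transitively implied by branching `neuron`
        into `decision`, found by breadth-first traversal of the graph."""
        seen = {(neuron, decision)}
        queue = deque([(neuron, decision)])
        while queue:
            node = queue.popleft()
            for nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        seen.discard((neuron, decision))
        return seen

# Example: if branching neuron (2, 17) active implies (3, 4) inactive, which in
# turn implies (4, 9) active, a BaB verifier can fix both implied neurons in
# this subproblem instead of creating new branches for them.
big = BoundImplicationGraph()
big.add_implication((2, 17), "active", (3, 4), "inactive")
big.add_implication((3, 4), "inactive", (4, 9), "active")
print(big.implied_decisions((2, 17), "active"))
```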