Generating and Checking DNN Verification Proofs

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 | NeurIPS 2025 poster | CC BY 4.0
Keywords: DNN Verification, Proof Generation, Proof Format
TL;DR: A new proof format (APTP) and an efficient proof checker (APTPchecker) make neural network verification more reliable and robust across different tools.
Abstract: Deep Neural Networks (DNNs) have emerged as an effective approach to implementing challenging subproblems. They are increasingly being used as components in critical transportation, medical, and military systems. However, like human-written software, DNNs may have flaws that can lead to unsafe system performance. To confidently deploy DNNs in such systems, strong evidence is needed that they do not contain such flaws. This has led researchers to explore the adaptation and customization of software verification approaches to the problem of neural network verification (NNV). Many dozens of NNV tools have been developed in recent years, and as a field these techniques have matured to the point where realistic networks can be analyzed to detect flaws and to prove conformance with specifications. However, NNV tools are highly engineered and complex, and may harbor flaws that cause them to produce unsound results. We identify commonalities in the algorithmic approaches taken by NNV tools to define a verifier-independent proof format---activation pattern tree proofs (APTP)---and design an algorithm for checking those proofs that is proven correct and optimized to enable scalable checking. We demonstrate that existing verifiers can efficiently generate APTP proofs, that APTPchecker significantly outperforms prior work on a benchmark of 16 neural networks and 400 NNV problems, and that it is robust to variation in APTP proof structure arising from different NNV tools. APTPchecker is available at: https://github.com/dynaroars/APTPchecker.
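To make the idea of an activation pattern tree proof concrete, the following is a minimal, hypothetical sketch of such a proof and a recursive checker. The actual APTP format and the APTPchecker algorithm are specified in the paper and the linked repository; all names, fields, and the leaf-certificate callback below are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch of an activation-pattern proof tree and a checker.
# The real APTP format and APTPchecker are defined by the paper; the
# structure and the leaf check here are assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class APTNode:
    """A node in an activation-pattern tree.

    An internal node splits the verification problem on the activation
    status of one ReLU neuron; a leaf carries a certificate that its
    accumulated activation pattern is infeasible or satisfies the property.
    """
    neuron: Optional[int] = None           # split neuron index (internal nodes only)
    active: Optional["APTNode"] = None     # subtree where the neuron is active
    inactive: Optional["APTNode"] = None   # subtree where the neuron is inactive
    certificate: Optional[object] = None   # leaf-level evidence (e.g., infeasibility proof)


def check_tree(node: APTNode,
               pattern: Dict[int, bool],
               check_leaf: Callable[[Dict[int, bool], object], bool]) -> bool:
    """Check that the tree covers both branches of every split and that
    every leaf certificate is valid under its accumulated pattern."""
    if node.neuron is None:
        # Leaf: delegate certificate validation to the caller-supplied check.
        return check_leaf(pattern, node.certificate)
    if node.active is None or node.inactive is None:
        return False  # incomplete split: the tree does not cover all cases
    if node.neuron in pattern:
        return False  # a neuron may be split at most once along a path
    return (check_tree(node.active, {**pattern, node.neuron: True}, check_leaf)
            and check_tree(node.inactive, {**pattern, node.neuron: False}, check_leaf))


# Toy usage: a proof that splits on neuron 0, with placeholder leaf certificates.
proof = APTNode(neuron=0,
                active=APTNode(certificate="unsat-cert-A"),
                inactive=APTNode(certificate="unsat-cert-B"))
print(check_tree(proof, {}, lambda pat, cert: cert is not None))  # True
```

In this sketch the real work happens in the leaf check (for example, validating an LP infeasibility certificate for the region induced by the activation pattern); the tree-level checker only enforces that every split is covered and no neuron is split twice along a path.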
Primary Area: Other (please use sparingly, only use the keyword field for more details)
Submission Number: 16449