CLEVEREST: Accelerating CEGAR-based Neural Network Verification via Adversarial Attacks

Published: 01 Jan 2022, Last Modified: 28 Sep 2024 · SAS 2022 · CC BY-SA 4.0
Abstract: Deep neural networks (DNNs) have achieved remarkable performance on a myriad of complex tasks. However, their lack of robustness and their black-box nature hinder their deployment in safety-critical systems. A large number of testing and formal verification techniques have been proposed recently, aiming to provide quality assurance for DNNs. Generally speaking, testing is a fast and simple way to disprove—but not to prove—certain properties of DNNs, while formal verification can provide correctness guarantees but often suffers from scalability and efficiency issues. In this work, we present a novel methodology, CLEVEREST, that accelerates formal verification of DNNs by synergistically combining testing and formal verification within the counterexample-guided abstraction refinement (CEGAR) framework. We instantiate our methodology by leveraging CEGAR-NN, a CEGAR-based neural network verification method, and a representative adversarial attack method for testing. We conduct extensive experiments on the widely used ACAS Xu DNN benchmark. The experimental results show that testing effectively reduces the number of formal verification calls in the check-refine loop and hence significantly improves efficiency.
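The check-refine interplay the abstract describes can be sketched as follows. This is a hedged toy illustration, not the paper's implementation: the network, property, attack (a corner probe of the input box), and refinement rule are all stand-ins, and the real ACAS Xu models and the CEGAR-NN verifier are replaced by stubs. The idea shown is only the high-level one: inside each CEGAR iteration, run a cheap adversarial attack first, and invoke the expensive formal verifier only when the attack fails to find a counterexample.

```python
import numpy as np
from itertools import product

def forward(w, x):
    """Toy one-layer ReLU net; the safety property asks: output <= threshold."""
    return float(np.maximum(w @ x, 0.0).sum())

def attack(w, x0, eps, threshold):
    """Cheap falsification (stand-in for an adversarial attack):
    probe the corners of the eps-box around x0 for a property violation."""
    for signs in product((-eps, eps), repeat=len(x0)):
        cand = x0 + np.array(signs)
        if forward(w, cand) > threshold:
            return cand  # candidate counterexample found by testing
    return None

def cegar_with_testing(w_concrete, w_abstract, x0, eps, threshold, max_iters=10):
    """CEGAR loop in which testing filters iterations before formal verification."""
    w = w_abstract
    for _ in range(max_iters):
        cand = attack(w, x0, eps, threshold)
        if cand is not None:
            if forward(w_concrete, cand) > threshold:
                return ("violated", cand)   # counterexample is real on the concrete net
            w = 0.5 * (w + w_concrete)      # spurious: refine the abstraction
            continue
        # Testing found nothing: only now pay for the formal check (stubbed here).
        return ("verified-by-formal-check", None)
    return ("unknown", None)

# Hypothetical over-approximating abstraction of a 2-input network.
w_concrete = np.array([[1.0, 1.0]])
w_abstract = np.array([[2.0, 2.0]])
x0 = np.zeros(2)

print(cegar_with_testing(w_concrete, w_abstract, x0, eps=0.1, threshold=5.0))
print(cegar_with_testing(w_concrete, w_abstract, x0, eps=0.1, threshold=0.1))
```

The third possible trace (a spurious counterexample on the abstract net that triggers refinement) occurs, for example, at `threshold=0.3` with these weights: the corner probe violates the property on the abstract net but not on the concrete one, so the loop refines and the attack then fails, falling through to the formal check.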
