Truth Table Deep Convolutional Neural Network, A New SAT-Encodable Architecture - Application To Complete Robustness

Published: 28 Jan 2022 · Last Modified: 13 Feb 2023 · ICLR 2022 Submission · Readers: Everyone
Keywords: AI Safety, SAT-encodable Neural Network, Formal Verification, Complete Verification Robustness, Interpretability, Logic Rules, XAI
Abstract: With the expanding role of neural networks, the need for formal verification of their behavior, interpretability, and human post-processing has become critical in many applications. In 2018, it was shown that Binary Neural Networks (BNNs) have an equivalent representation in boolean logic and can be formally analyzed with logical reasoning tools such as SAT or MaxSAT solvers. This formulation is powerful, as it allows us to address a vast range of questions: existential, probabilistic, explanation generation, etc. However, to date, only BNNs can be transformed into a SAT formula, and their strong binary constraints limit their natural accuracy. Moreover, the corresponding SAT conversion method intrinsically leads to formulas with a large number of variables and clauses, impeding both interpretability and the scalability of formal verification. In this work, we introduce Truth Table Deep Convolutional Neural Networks (TT-DCNNs), a new family of SAT-encodable models with real-valued weights and real-valued intermediate activations, together with a highly interpretable conversion method. The TT-DCNN architecture enables, for the first time, the extraction of all logical classification rules from a performant neural network; these rules can then be easily interpreted by anyone familiar with the domain. This makes it possible to integrate human knowledge in post-processing and to enumerate all possible input/output pairs prior to deployment in production. We believe our new architecture paves the way toward bridging eXplainable AI (XAI) and formal verification. First, we experimentally show that TT-DCNNs offer a better trade-off between natural accuracy and formal verification than BNNs. Then, in the robustness verification setting, we demonstrate that TT-DCNNs achieve higher verified accuracy than BNNs with comparable computation time. Finally, our encoding drastically decreases the number of clauses and variables, enabling the use of general-purpose SAT solvers and exact model-counting solvers. Our real-valued network has general applicability, and we believe its demonstrated robustness constitutes a suitable response to the rising demand for functional formal verification.
One-sentence Summary: We introduce a new family of real-weighted SAT-encodable models and we apply it to complete robustness verification.
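To illustrate the general idea behind SAT-encodable networks described in the abstract, the following is a minimal, hypothetical sketch: a tiny boolean block (here a stand-in `majority` function, not the paper's actual TT-DCNN conversion method) has its truth table enumerated and encoded as CNF clauses, after which a SAT solver can answer queries about its behavior. It assumes the `python-sat` package; variable numbering and the query examples are illustrative only.

```python
# Minimal sketch (assumption: not the authors' method) of encoding a small
# boolean block's truth table as CNF so a SAT solver can reason about it.
# Requires: pip install python-sat
from itertools import product
from pysat.solvers import Glucose3

def majority(a, b, c):
    # Hypothetical stand-in for a small learned block with binary I/O.
    return int(a + b + c >= 2)

# SAT variables: 1..3 are the block's inputs, 4 is its output.
X = [1, 2, 3]
Y = 4

def lit(var, value):
    # Literal asserting "var == value" (positive if 1, negated if 0).
    return var if value else -var

solver = Glucose3()
for bits in product([0, 1], repeat=3):
    # Implication clause: (x1=b1 & x2=b2 & x3=b3) -> (y == f(b1, b2, b3))
    clause = [-lit(v, b) for v, b in zip(X, bits)]
    clause.append(lit(Y, majority(*bits)))
    solver.add_clause(clause)

# Query: can the block output 1 while the first input is forced to 0?
print(solver.solve(assumptions=[-1, Y]))       # True  (e.g. x = 0, 1, 1)
# Query: can it output 1 with the first two inputs forced to 0?
print(solver.solve(assumptions=[-1, -2, Y]))   # False
solver.delete()
```

In this toy setting, robustness-style questions (e.g., "can flipping certain inputs change the output?") reduce to satisfiability queries over the encoded truth table, which is the kind of analysis the abstract attributes to SAT-encodable architectures.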