Verifying and Interpreting Neural Networks Using Finite Automata

Published: 01 Jan 2024 · Last Modified: 26 May 2025 · DLT 2024 · CC BY-SA 4.0
Abstract: Verifying properties and interpreting the behaviour of deep neural networks (DNNs) is an important task given their ubiquitous use in applications, including safety-critical ones, and their black-box nature. We propose an automata-theoretic approach to problems arising in DNN analysis. We show that the input-output behaviour of a DNN can be captured precisely by a (special) weak Büchi automaton, and we show how these automata can be used to address common DNN verification and interpretation tasks, such as adversarial robustness or minimum sufficient reasons.
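To give a flavour of the idea, here is a minimal toy sketch: the paper's construction yields a weak Büchi automaton over infinite words encoding reals, but the same principle can be illustrated on a tiny quantized function, whose exact input-output relation is captured by a finite automaton built as a trie over encoded input-output pairs. All names, bit widths, and the example function below are illustrative assumptions, not the paper's construction.

```python
# Toy illustration (NOT the paper's weak Büchi construction): capture the
# exact input-output relation of a tiny quantized "network" f(x) = relu(2x - 3)
# over 2-bit inputs as a DFA accepting the words bits(x) + bits(f(x)).

def relu(v):
    return max(0, v)

def bits(v, width=2):
    # Fixed-width binary encoding of a small non-negative integer.
    return format(v, f"0{width}b")

def build_dfa(words):
    """Build a trie-shaped DFA accepting exactly the given finite language."""
    trans, accept = {}, set()
    next_state = 1  # state 0 is the initial state
    for w in words:
        state = 0
        for ch in w:
            key = (state, ch)
            if key not in trans:
                trans[key] = next_state
                next_state += 1
            state = trans[key]
        accept.add(state)
    return trans, accept

def accepts(dfa, word):
    trans, accept = dfa
    state = 0
    for ch in word:
        if (state, ch) not in trans:
            return False
        state = trans[(state, ch)]
    return state in accept

# The DFA encodes the full input-output relation of f over inputs 0..3:
# 0 -> 0, 1 -> 0, 2 -> 1, 3 -> 3.
words = [bits(x) + bits(relu(2 * x - 3)) for x in range(4)]
dfa = build_dfa(words)

# Verification queries become automaton membership checks, e.g.
# "does input 0 map to output 0?" (yes) and "to output 1?" (no).
assert accepts(dfa, bits(0) + bits(0))
assert not accepts(dfa, bits(0) + bits(1))
```

Once the relation is an automaton, verification questions (e.g. robustness within an input region) reduce to standard automata operations such as product construction and emptiness checking; the paper extends this idea to real-valued inputs via ω-word encodings.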