Generalized Universal Approximation for Certified Networks

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: adversarial deep learning, neural network verification, interval analysis
Abstract: To certify the safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using interval bound propagation. To understand the power of interval bounds, we present the abstract universal approximation (AUA) theorem, a generalization of the recent result by Baader et al. (2020) for ReLU networks to a large class of neural networks. The AUA theorem states that for any continuous function $f$, there exists a neural network that (1) approximates $f$ (universal approximation) and (2) whose interval bounds are an arbitrarily close approximation of the set semantics of $f$. The network may be constructed using any activation function from a rich class of functions---sigmoid, tanh, ReLU, ELU, etc.---making our result quite general. The key implication of the AUA theorem is that certifiably robust neural networks always exist and can be constructed using a wide range of activation functions.
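The two conditions in the abstract can be read as a single approximation guarantee. One hedged formalization (a paraphrase of the abstract's prose, not necessarily the paper's exact statement; $N^\sharp(B)$ denotes the interval bounds obtained by propagating a box $B$ through the network $N$):

```latex
% Paraphrase of the abstract's two conditions; not the paper's exact theorem.
% N^\sharp(B): interval bounds from propagating the box B through N.
\[
\forall \varepsilon > 0 \;\; \exists N:\quad
\sup_{x \in [0,1]^d} \lvert N(x) - f(x) \rvert \le \varepsilon
\quad\text{and}\quad
\forall\, \text{boxes } B \subseteq [0,1]^d:\;
N^\sharp(B) \subseteq
\Big[ \inf_{x \in B} f(x) - \varepsilon,\; \sup_{x \in B} f(x) + \varepsilon \Big].
\]
```

The inclusion says the interval bounds of $N$ are nearly as tight as the exact range of $f$ on $B$; this is what makes certification by interval analysis possible, since interval bounds can otherwise be arbitrarily loose.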
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We generalize the universal approximation theorem for certified neural networks to a broad class of activation functions, showing that interval analysis can certify the robustness of the resulting networks.
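To illustrate the interval analysis referred to above, here is a minimal NumPy sketch of interval bound propagation through a tiny ReLU network. The weights, shapes, and helper names are hypothetical; this shows the generic technique, not the construction in the paper.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Sound bounds for x -> W @ x + b on the box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # The lower bound pairs positive weights with lo and negative weights with hi.
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """Monotone activations map interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-layer network and an L-infinity ball of radius 0.1 around 0.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])
lo, hi = np.full(2, -0.1), np.full(2, 0.1)

lo, hi = relu_bounds(*affine_bounds(W1, b1, lo, hi))
lo, hi = affine_bounds(W2, b2, lo, hi)
print(lo, hi)  # output box guaranteed to contain N(x) for every x in the input box
```

Certifying robustness then amounts to checking that this output box excludes unsafe outputs (e.g., the lower bound of the correct-class margin stays positive) over the whole perturbation box.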
Reviewed Version (pdf): https://openreview.net/references/pdf?id=ITakkwrn5K