Efficient Statistical Assessment of Neural Network Corruption Robustness

21 May 2021, 20:50 (modified: 21 Jan 2022, 19:00) · NeurIPS 2021 Poster
Keywords: deep learning, robustness, reliability, Monte Carlo
TL;DR: Using a sequential Monte Carlo algorithm, we efficiently assess the reliability of neural networks.
Abstract: We quantify the robustness of a trained network to input uncertainties with a stochastic simulation inspired by the field of Statistical Reliability Engineering. The robustness assessment is cast as a statistical hypothesis test: the network is deemed locally robust if the estimated probability of failure is lower than a critical level. The procedure is based on an Importance Splitting simulation that generates samples of rare events. We derive theoretical guarantees that are non-asymptotic with respect to the sample size. Experiments on large-scale networks demonstrate the efficiency of our method, which requires only a low number of calls to the network function.
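To give a feel for the approach described in the abstract, the sketch below implements a generic adaptive Importance Splitting (multilevel splitting) estimator for a rare-event probability P(score(X) ≥ threshold) under a standard Gaussian input distribution, mutating survivors with a Metropolis random walk. This is a hypothetical toy illustration of the general technique, not the authors' implementation (see their repository for that); all names, the Gaussian prior, and the parameter choices are assumptions.

```python
import numpy as np

def importance_splitting(score, threshold, dim=2, n=2000, frac=0.5,
                         sigma=0.3, mh_steps=10, seed=0):
    """Adaptive multilevel splitting estimate of p = P(score(X) >= threshold)
    for X ~ N(0, I_dim). `score` maps an (n, dim) array to an (n,) array."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))          # initial population from the prior
    s = score(x)
    k = int(frac * n)                          # target number of survivors per level
    log_p = 0.0
    for _ in range(100):                       # safety cap on the number of levels
        level = np.partition(s, n - k)[n - k]  # k-th largest score = next level
        if level >= threshold:
            break                              # final level reached
        log_p += np.log(np.mean(s >= level))   # fraction surviving this level
        survivors = x[s >= level]
        x = survivors[rng.integers(0, len(survivors), n)]  # resample to size n
        # Metropolis moves targeting the prior restricted to {score >= level}
        for _ in range(mh_steps):
            prop = x + sigma * rng.standard_normal(x.shape)
            log_acc = 0.5 * (np.sum(x**2, 1) - np.sum(prop**2, 1))
            ok = (score(prop) >= level) & (np.log(rng.random(n)) < log_acc)
            x = np.where(ok[:, None], prop, x)
        s = score(x)
    # product of conditional level probabilities times the final exceedance rate
    return np.exp(log_p) * np.mean(s >= threshold)
```

In the paper's setting, `score` would measure how close a perturbed input comes to flipping the network's prediction, and the network would be declared locally robust if the returned estimate falls below a prescribed critical level; each `score` evaluation corresponds to a call to the network function, which is the cost the method keeps low.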
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/karimtito/efficient-statistical