Abstract: We derive exact upper and lower bounds for the cumulative distribution function (cdf) of the output of a neural network (NN) over its entire support subject to noisy (stochastic) inputs. The upper and lower bounds converge to the true cdf over its domain as the resolution increases. Our method applies to any feedforward NN using continuous monotonic piecewise twice continuously differentiable activation functions (e.g., ReLU, tanh, and softmax), as well as to convolutional NNs, which were beyond the scope of competing approaches. The novel, instrumental tool of our approach is to bound general NNs with ReLU NNs. The ReLU NN-based bounds are then used to derive the upper and lower bounds of the cdf of the NN output. Experiments demonstrate that our method delivers guaranteed bounds on the predictive output distribution over its support, thus providing exact error guarantees, in contrast to competing approaches.
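To give a flavor of what guaranteed cdf bounds of this kind look like, the following is a minimal, hypothetical sketch, not the paper's algorithm: it assumes a tiny ReLU network with placeholder random weights, a uniform input distribution on a box, and uses plain interval bound propagation over a grid of input cells to obtain lower and upper bounds on the output cdf that tighten as the grid is refined. All names and parameters here are illustrative.

```python
# Hypothetical illustration: guaranteed cdf bounds for a small ReLU network
# under a uniform input distribution on a box, via interval propagation over
# a grid of input cells. Not the paper's method; weights are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Small ReLU MLP: 2 -> 8 -> 1 (arbitrary weights for the sketch).
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def affine_interval(W, b, lo, hi):
    """Exact interval image of an affine map applied to the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def output_bounds(lo, hi):
    """Guaranteed bounds on the network output over the input box [lo, hi]."""
    lo, hi = affine_interval(W1, b1, lo, hi)
    lo, hi = relu_interval(lo, hi)
    lo, hi = affine_interval(W2, b2, lo, hi)
    return lo.item(), hi.item()

# Input x ~ Uniform([-1, 1]^2); partition the box into a regular grid.
n_cells = 40
edges = np.linspace(-1.0, 1.0, n_cells + 1)
cell_mass = 1.0 / (n_cells * n_cells)   # equal probability mass per cell

def cdf_bounds(t):
    """Guaranteed lower/upper bounds on F(t) = P(f(x) <= t)."""
    F_low, F_up = 0.0, 0.0
    for i in range(n_cells):
        for j in range(n_cells):
            lo = np.array([edges[i], edges[j]])
            hi = np.array([edges[i + 1], edges[j + 1]])
            out_lo, out_hi = output_bounds(lo, hi)
            if out_hi <= t:   # every point of the cell certainly maps below t
                F_low += cell_mass
            if out_lo <= t:   # the cell may contain points mapping below t
                F_up += cell_mass
    return F_low, F_up

# F_low <= true cdf at t <= F_up; the gap shrinks as the grid is refined.
print(cdf_bounds(0.0))
```

The design choice in this sketch is that refining the grid tightens both bounds, mirroring the abstract's statement that the bounds converge to the true cdf as the resolution increases; the paper's actual contribution is the additional step of bounding general (non-ReLU) networks by ReLU networks so that such piecewise-linear reasoning applies.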
Lay Summary: We developed a way to compute guaranteed statistical bounds on the distribution of a neural network's output when its inputs are uncertain or noisy.
Our method works for many types of neural networks, including standard feedforward networks and convolutional networks, and it supports popular activation functions like ReLU, tanh, and softmax. This makes our approach more widely applicable than others currently available.
A novel feature of our method is that we use simpler ReLU-based networks to bound more complex networks. This allows us to reliably calculate upper and lower bounds on how the complex network behaves under uncertainty.
Our method gives guaranteed, precise bounds on the network's output behavior, something other approaches cannot fully provide.
Primary Area: Theory->Probabilistic Methods
Keywords: Neural Networks, Uncertainty Propagation, Predictive Output Distribution, Guaranteed Bounds
Submission Number: 11190