Abstract: We flip the usual approach to studying invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping. We provide theoretical and numerical results on the inverse of ReLU-layers. First, we derive a necessary and sufficient condition for the existence of invariance, which admits a geometric interpretation. Next, we turn to robustness by analyzing local effects on the inverse. To conclude, we show how this reverse point of view not only provides insights into key effects, but also enables us to view adversarial examples from a new perspective.
Keywords: deep neural networks, invertibility, invariance, robustness, ReLU networks
TL;DR: We analyze the invertibility of deep neural networks by studying preimages of ReLU-layers and the stability of the inverse.
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10)
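
The non-uniqueness of a ReLU-layer's inverse, the abstract's starting point, is easy to reproduce numerically. Below is a minimal NumPy sketch (illustrative only, not the paper's code; the layer sizes, random seed, and step size are arbitrary assumptions) that constructs two distinct inputs which a random ReLU-layer maps to the same output, by perturbing along directions invisible to the active neurons.

```python
import numpy as np

# Minimal sketch of the non-uniqueness of a ReLU-layer's inverse:
# two different inputs mapped to the *same* output. Layer sizes,
# seed, and step size below are arbitrary choices for illustration.
rng = np.random.default_rng(0)

m, n = 4, 6                        # 4 ReLU neurons, 6-dim input (m < n)
W = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def relu_layer(x):
    """y = max(Wx + b, 0), applied coordinate-wise."""
    return np.maximum(W @ x + b, 0.0)

x = rng.standard_normal(n)
y = relu_layer(x)

pre = W @ x + b
active = pre > 0                   # neurons whose pre-activation survives

# Perturb x only in the null space of the active rows of W: the active
# pre-activations stay fixed, and ReLU already discards the rest.
A = W[active]
d = rng.standard_normal(n)
d -= np.linalg.pinv(A) @ (A @ d)   # remove the component visible to A
d /= np.linalg.norm(d)

# Step small enough that every dead pre-activation stays negative.
dead = ~active
if dead.any():
    row_norms = np.linalg.norm(W[dead], axis=1)
    eps = 0.5 * np.min(-pre[dead]) / np.max(row_norms)
else:
    eps = 1.0

x2 = x + eps * d

print(np.allclose(relu_layer(x2), y))  # True  -> same output
print(np.allclose(x2, x))              # False -> different input
```

Every direction that is invisible to the active neurons and keeps the dead pre-activations negative yields another preimage of y; this is precisely the non-uniqueness of the inverse mapping that the paper analyzes.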