Towards neural networks that provably know when they don't know

Published: 20 Dec 2019 · Last Modified: 22 Oct 2023 · ICLR 2020 Conference Blind Submission
Abstract: It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data; in this sense, ReLU networks do not know when they don't know, even though this property is crucial in safety-critical applications. In the context of out-of-distribution (OOD) detection, a number of proposals have been made to mitigate this problem, but none of them provides mathematical guarantees. In this paper we propose a new approach to OOD detection that overcomes both problems: it can be used with ReLU networks, provably yields low-confidence predictions far away from the training data, and provides the first certificates for low-confidence predictions in a neighborhood of an out-distribution point. In our experiments we show that state-of-the-art methods fail in this worst-case setting, whereas our model can guarantee its performance while retaining state-of-the-art OOD detection performance.
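
The following toy sketch illustrates the qualitative behaviour the abstract guarantees, not the paper's actual construction: the class centroids, the Gaussian in-/out-distribution density models, and the mixing rule below are all hypothetical choices made for illustration. It shows how blending a classifier with density models for the in- and out-distribution can drive the reported confidence down to the uniform value 1/K far from the training data, whereas a plain softmax stays arbitrarily over-confident.

```python
import numpy as np

# Illustrative sketch only (not the paper's model): a classifier whose confidence
# is pulled towards 1/K wherever the in-distribution density is negligible.

K, D = 3, 2                                                # number of classes, input dimension
means = np.array([[4.0, 0.0], [-4.0, 0.0], [0.0, 4.0]])    # toy class centroids

def class_posteriors(x):
    """Stand-in classifier: softmax over negative squared distances to the centroids."""
    logits = -0.5 * np.sum((means - x) ** 2, axis=1)
    logits -= logits.max()                                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def density_in(x, sigma=1.0):
    """Toy in-distribution density: equal-weight Gaussian mixture around the centroids."""
    d2 = np.sum((means - x) ** 2, axis=1)
    return np.mean(np.exp(-0.5 * d2 / sigma**2)) / (2 * np.pi * sigma**2) ** (D / 2)

def density_out(x, sigma=20.0):
    """Toy out-distribution density: a much broader Gaussian centred at the origin."""
    return np.exp(-0.5 * np.sum(x**2) / sigma**2) / (2 * np.pi * sigma**2) ** (D / 2)

def calibrated_confidence(x):
    """Blend the classifier with the uniform prediction 1/K, weighted by the two densities."""
    p_in, p_out = density_in(x), density_out(x)
    w = p_in / (p_in + p_out + 1e-300)                     # -> 0 far away: p_out decays more slowly
    probs = w * class_posteriors(x) + (1.0 - w) / K
    return probs.max()

for x in map(np.array, ([4.0, 0.0], [20.0, 0.0], [200.0, 0.0])):
    print(f"|x| = {np.linalg.norm(x):6.1f}   "
          f"plain softmax = {class_posteriors(x).max():.3f}   "
          f"calibrated = {calibrated_confidence(x):.3f}")
```

Far from the training data both densities shrink, but the broader out-density decays more slowly, so the weight w tends to 0 and the reported confidence falls back to 1/K ≈ 0.333 while the plain softmax still reports ≈ 1.000. The certificates claimed in the paper rely on its specific construction and are not reproduced by this sketch.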
Code: [AlexMeinke/certified-certain-uncertainty](https://github.com/AlexMeinke/certified-certain-uncertainty)
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CIFAR-100](https://paperswithcode.com/dataset/cifar-100), [MNIST](https://paperswithcode.com/dataset/mnist), [SVHN](https://paperswithcode.com/dataset/svhn), [Tiny Images](https://paperswithcode.com/dataset/tiny-images)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1909.12180/code)