Keywords: RBF network, OOD detection, overconfident neural networks
Abstract: Neural networks are widely used across many fields, but they tend to produce high-confidence predictions for examples that lie far from the training data. As a result, they can be highly confident while making gross mistakes, which limits their reliability in safety-critical applications such as autonomous driving and space exploration.
In this paper, we present a more general neuron formulation that contains the standard dot-product neuron and the RBF neuron as two extreme cases of a shape parameter. Using ReLU as the activation function, we obtain a novel neuron that has compact support, meaning its output is zero outside a bounded domain. We also show how to avoid the difficulties of training a neural network with such neurons: start from a trained standard neural network and gradually increase the shape parameter to its desired value.
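For illustration, here is a minimal sketch of one such interpolation between a dot-product neuron and an RBF-style neuron; the exact formulation, parameterization, and names (`alpha`, `w`, `b`) are assumptions for this example and may differ from the paper's.

```python
import numpy as np

def generalized_neuron(x, w, b, alpha):
    """Hypothetical neuron interpolating between a dot-product neuron (alpha=0)
    and an RBF-style neuron (alpha=1). With ReLU and any alpha > 0, the negative
    squared-distance term dominates far from w, so the output is zero outside a
    bounded region, i.e. the neuron has compact support."""
    dot_term = np.dot(w, x) + b              # standard dot-product pre-activation
    rbf_term = b - np.sum((x - w) ** 2)      # RBF-style pre-activation (negative squared distance)
    pre_activation = (1.0 - alpha) * dot_term + alpha * rbf_term
    return np.maximum(pre_activation, 0.0)   # ReLU activation

# Gradually increasing alpha from 0 (a trained standard network) toward the
# desired value is the training strategy the abstract describes.
```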
Through experiments on standard benchmark datasets, we show the promise of the proposed approach: it maintains good predictive performance on in-distribution samples while consistently detecting and assigning low confidence to out-of-distribution samples.
One-sentence Summary: A neural network that outputs all zeros for examples far from the training data, and is therefore not overconfident on out-of-distribution samples.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=xjfbGelkx