- Keywords: Out-of-Distribution Detection, OOD detection, Outlier Exposure, Classification, Open-World Classification, Anomaly Detection, Novelty Detection, Calibration, Neural Networks
- TL;DR: We propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with Outlier Exposure on both image and text classification tasks.
- Abstract: Deep neural networks have achieved great success in classification tasks in recent years. However, one major obstacle on the path toward artificial intelligence is the inability of neural networks to reliably detect samples from novel class distributions; consequently, most existing classification algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network so that it can efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on test examples from known classes. Building on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE on both image and text classification tasks. Additionally, the construction of this method makes it suitable for training any classification algorithm based on maximum-likelihood methods.
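The abstract describes a loss built on Outlier Exposure. The paper's own novel loss is not given in this summary, so as context the sketch below shows only the standard OE objective it builds on (Hendrycks et al.): cross-entropy on in-distribution samples plus a term pushing predictions on auxiliary outliers toward the uniform distribution. Function names and the weighting parameter `lam` are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Baseline Outlier Exposure objective (illustrative sketch):
    CE(in-distribution) + lam * CE(outliers, uniform distribution)."""
    p_in = softmax(logits_in)
    n = len(labels_in)
    # Standard cross-entropy on in-distribution samples
    ce = -np.mean(np.log(p_in[np.arange(n), labels_in] + 1e-12))
    # Cross-entropy between predictions on outliers and the uniform
    # distribution over the k known classes
    log_p_out = np.log(softmax(logits_out) + 1e-12)
    k = logits_out.shape[1]
    uniform_ce = -np.mean(log_p_out.sum(axis=1) / k)
    return ce + lam * uniform_ce
```

Under this objective, confident predictions on outlier inputs are penalized, so the network learns to output near-uniform probabilities on OOD data, which a score such as maximum softmax probability can then threshold.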
- Code: https://www.dropbox.com/sh/1czixfxns51t9gu/AAD45AvxBFWOHx8RUFxFADbJa?dl=0