Improving Confident-Classifiers For Out-of-distribution Detection

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: A classifier-based out-of-distribution detection method.
Abstract: Discriminatively trained neural classifiers can be trusted only when the input data come from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important to avoid classification errors. In the context of OOD detection for image classification, one recent approach proposes training a classifier called the “confident-classifier” by minimizing the standard cross-entropy loss on in-distribution samples and minimizing the KL divergence between the predictive distribution of OOD samples in the low-density “boundary” of the in-distribution and the uniform distribution (i.e., maximizing the entropy of the outputs). Samples can then be detected as OOD if they have low confidence or high entropy. In this paper, we analyze this setting both theoretically and experimentally. We also propose a novel algorithm to generate the “boundary” OOD samples and use them to train a classifier with an explicit “reject” class for OOD samples. We compare our approach against several recent classifier-based OOD detectors, including the confident-classifiers, on the MNIST and Fashion-MNIST datasets. Overall, the proposed approach consistently performs better than the others across most of the experiments.
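The confident-classifier objective described above combines in-distribution cross-entropy with a term that pushes predictions on boundary OOD samples toward the uniform distribution. The following is a minimal PyTorch sketch of that objective, not the authors' released code; the `model` interface, the weighting coefficient `beta`, and the tensor shapes are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def confident_classifier_loss(model, x_in, y_in, x_ood, beta=1.0):
    """Sketch of the confident-classifier training objective:
    cross-entropy on in-distribution data plus KL(predictive || uniform)
    on boundary OOD samples (equivalently, maximizing output entropy)."""
    # Standard cross-entropy on in-distribution samples.
    logits_in = model(x_in)
    ce = F.cross_entropy(logits_in, y_in)

    # KL divergence between the predictive distribution on OOD samples
    # and the uniform distribution over K classes:
    # KL(p || U) = sum_k p_k log p_k + log K.
    logits_ood = model(x_ood)
    log_p_ood = F.log_softmax(logits_ood, dim=1)
    num_classes = logits_ood.size(1)
    kl_to_uniform = (log_p_ood.exp() * log_p_ood).sum(dim=1).mean() \
        + torch.log(torch.tensor(float(num_classes)))

    return ce + beta * kl_to_uniform
```

At test time, such a classifier flags an input as OOD when its maximum softmax confidence is low (or its predictive entropy is high); the paper's proposed alternative instead adds an explicit (K+1)-th “reject” class trained on generated boundary OOD samples.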
Keywords: Out-of-distribution detection, Manifold, Nullspace, Variational Auto-encoder, GAN, Confident-classifier
Code: https://github.com/iclr2020-ai/ICLR2020