Detecting Out-Of-Distribution Samples Using Low-Order Deep Features Statistics

Sep 27, 2018 (edited Dec 21, 2018) · ICLR 2019 Conference Blind Submission
  • Abstract: The ability to detect when an input sample was not drawn from the training distribution is an important desirable property of deep neural networks. In this paper, we show that a simple ensembling of first- and second-order deep feature statistics can be exploited to effectively differentiate in-distribution and out-of-distribution samples. Specifically, we observe that the mean and standard deviation within feature maps differ greatly between in-distribution and out-of-distribution samples. Based on this observation, we propose a simple and efficient plug-and-play detection procedure that does not require re-training, pre-processing, or changes to the model. The proposed method outperforms the state-of-the-art by a large margin in all standard benchmarking tasks, while being much simpler to implement and execute. Notably, our method improves the true negative rate from 39.6% to 95.3% when 95% of in-distribution samples (CIFAR-100) are correctly detected using a DenseNet and the out-of-distribution dataset is TinyImageNet resize. The source code of our method will be made publicly available.
  • Keywords: computer vision, out-of-distribution detection, image classification
  • TL;DR: Detecting out-of-distribution samples by using low-order feature statistics without requiring any change in underlying DNN.
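The abstract's core observation is that the per-channel mean and standard deviation of intermediate feature maps separate in-distribution from out-of-distribution inputs. The sketch below is a minimal illustration of that idea, not the paper's actual procedure: it assumes feature maps of shape (C, H, W), and the `deviation_score` function (a simple z-score-style distance from training-set channel statistics) is a hypothetical scoring rule chosen for illustration.

```python
import numpy as np

def channel_stats(feature_map):
    """First- and second-order statistics per channel.

    feature_map: array of shape (C, H, W).
    Returns (means, stds), each of shape (C,).
    """
    means = feature_map.mean(axis=(1, 2))
    stds = feature_map.std(axis=(1, 2))
    return means, stds

def deviation_score(feature_map, ref_mean, ref_std, eps=1e-8):
    """Hypothetical OOD score: average absolute deviation of the test
    sample's channel means from reference (training) statistics,
    normalized by the reference spread. Larger => more likely OOD."""
    m, _ = channel_stats(feature_map)
    return float(np.mean(np.abs(m - ref_mean) / (ref_std + eps)))

# Toy demo with synthetic "features": 100 training maps, 8 channels, 4x4.
rng = np.random.default_rng(0)
train_maps = rng.normal(0.0, 1.0, size=(100, 8, 4, 4))
ref_mean = train_maps.mean(axis=(0, 2, 3))   # per-channel reference mean
ref_std = train_maps.std(axis=(0, 2, 3))     # per-channel reference std

in_sample = rng.normal(0.0, 1.0, size=(8, 4, 4))   # matches training stats
ood_sample = rng.normal(3.0, 1.0, size=(8, 4, 4))  # shifted distribution

score_in = deviation_score(in_sample, ref_mean, ref_std)
score_ood = deviation_score(ood_sample, ref_mean, ref_std)
```

In a real network the feature maps would come from intermediate layers (e.g. via forward hooks in a deep-learning framework), and the paper ensembles both first- and second-order statistics; here only channel means drive the score, to keep the example short.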