Position: Supervised Classifiers Answer the Wrong Questions for OOD Detection

Published: 05 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 Position Paper Track poster · CC BY 4.0
TL;DR: OOD detection methods which rely on the features or logits of supervised models trained on in-distribution data have fundamental pathologies.
Abstract: To detect distribution shifts and improve model safety, many out-of-distribution (OOD) detection methods rely on the predictive uncertainty or features of supervised models trained on in-distribution data. In this position paper, we critically re-examine this popular family of OOD detection procedures and argue that these methods fundamentally answer the wrong questions for OOD detection. There is no simple fix to this misalignment, since a classifier trained only on in-distribution classes cannot be expected to identify OOD points; for instance, a cat-dog classifier may confidently misclassify an airplane if the airplane contains features that distinguish cats from dogs, even though it bears little overall resemblance to either class. We find that uncertainty-based methods incorrectly conflate high uncertainty with being OOD, while feature-based methods incorrectly conflate large feature-space distance with being OOD. We show how these pathologies manifest as irreducible errors in OOD detection and identify common settings where these methods are ineffective. Moreover, interventions intended to improve OOD detection, such as feature-logit hybrid methods, scaling of model and data size, epistemic uncertainty representation, and outlier exposure, also fail to address this fundamental misalignment in objectives.
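To make the two method families the abstract critiques concrete, here is a minimal illustrative sketch (not taken from the paper) of an uncertainty-based score, maximum softmax probability over the classifier's logits, and a feature-based score, Mahalanobis distance from a point's features to the nearest in-distribution class mean. All function names, shapes, and the shared-covariance assumption are illustrative choices, not the authors' implementation.

```python
# Sketch of two common OOD scoring families; names and shapes are assumptions.
import numpy as np


def msp_score(logits: np.ndarray) -> np.ndarray:
    """Uncertainty-based score: a higher max softmax probability is read as 'in-distribution'."""
    z = logits - logits.max(axis=1, keepdims=True)        # stabilize the softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)


def mahalanobis_score(features: np.ndarray,
                      class_means: np.ndarray,
                      shared_cov: np.ndarray) -> np.ndarray:
    """Feature-based score: negative distance to the closest class mean is read as 'in-distribution'."""
    prec = np.linalg.inv(shared_cov)
    dists = []
    for mu in class_means:                                  # one squared distance per in-distribution class
        diff = features - mu
        dists.append((diff @ prec * diff).sum(axis=1))
    return -np.min(np.stack(dists, axis=0), axis=0)         # nearest class determines the score


# Thresholding either score flags "OOD" points. The paper's argument is that both
# proxies can fail: a confidently (mis)classified airplane can still receive a high
# MSP score, and an unusual but in-distribution point can sit far from every class mean.
```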
Lay Summary: To improve model safety, many methods aim to detect whether a new input was drawn from a different distribution than the inputs the model saw during training. These methods often rely on the features or the uncertainties of the model itself. In this paper, we argue that methods relying only on the original model's features and uncertainties cannot accurately detect whether an input comes from a new distribution. Current methods wrongly assume that models are confident on inputs drawn from the training distribution and uncertain on inputs drawn from different distributions; however, a model trained to distinguish cats from dogs may confidently mislabel an airplane as a dog if the airplane and a dog share a few traits, despite generally looking nothing alike. Current methods also assume that only inputs from different distributions will have features far from the features of the training inputs, which is likewise incorrect. We identify common settings where these methods are ineffective and show that many popular interventions fail to address these pathologies.
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: OOD detection, OOD generalization
Submission Number: 320