Abstract: Deep learning models often exhibit overconfidence when predicting out-of-distribution (OOD) data, underscoring the crucial role of OOD detection in ensuring prediction reliability. Among various OOD detection approaches, post-hoc detectors have gained significant popularity, primarily due to their ease of implementation and competitive performance. However, recent benchmarks for OOD detection have revealed a lack of consistency in existing post-hoc methods. This inconsistency can be attributed to their sole reliance either on extreme information, such as the maximum logit, or on collective information (i.e., information spanned across classes or training samples) embedded within the output layer. In this paper, we propose ExCeL, which combines both extreme and collective information within the output layer for enhanced and consistent OOD detection performance. We leverage the logit of the top predicted class as the extreme information (i.e., the maximum logit), while the collective information is derived in a novel approach that involves assessing the probability of other classes appearing in subsequent ranks across various training samples. Our idea is motivated by the observation that, for in-distribution (ID) data, the ranking of classes beyond the predicted class is more deterministic than it is for OOD data. Experiments conducted on the CIFAR100, ImageNet-200, and ImageNet-1K datasets demonstrate that ExCeL is consistently among the top five of twenty-one existing post-hoc baselines when joint performance on near-OOD and far-OOD is considered (i.e., in terms of AUROC and FPR95). Furthermore, ExCeL shows the best overall performance across all datasets, unlike other baselines that excel on one dataset but suffer a performance drop on others.
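The abstract's core idea — pairing the maximum logit (extreme information) with rank statistics of the remaining classes estimated over training data (collective information) — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the functions `fit_rank_profiles` and `excel_score`, the log-probability aggregation, and the weighting parameter `alpha` are all assumptions made for illustration.

```python
import numpy as np

def fit_rank_profiles(train_logits, num_classes):
    """Estimate, for each top-1 class c, how often each other class
    appears at each subsequent rank across training samples.
    profiles[c][r, k] ~ P(class k at rank r | top-1 class is c)."""
    profiles = np.zeros((num_classes, num_classes, num_classes))
    counts = np.zeros(num_classes)
    for z in train_logits:
        order = np.argsort(-z)              # classes by descending logit
        c = order[0]
        counts[c] += 1
        for r, k in enumerate(order[1:], start=1):
            profiles[c][r, k] += 1
    for c in range(num_classes):
        if counts[c] > 0:
            profiles[c] /= counts[c]        # normalise to frequencies
    return profiles

def excel_score(z, profiles, alpha=1.0, eps=1e-12):
    """Higher score = more ID-like. Combines the maximum logit with
    how 'typical' the ranking of the remaining classes is for samples
    sharing the same top-1 prediction."""
    order = np.argsort(-z)
    max_logit = z[order[0]]                 # extreme information
    c = order[0]
    collective = 0.0                        # collective information
    for r, k in enumerate(order[1:], start=1):
        collective += np.log(profiles[c][r, k] + eps)
    return max_logit + alpha * collective
```

Under this sketch, ID inputs tend to reproduce the class ranking seen for their predicted class during training, so their rank log-frequencies are high; OOD inputs produce more arbitrary rankings, driving the collective term down.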
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yanwei_Fu2
Submission Number: 3293