Fitting Feature Norm to Confidence: A Regularization Approach for Robust Out-of-Distribution Detection
Keywords: Out-of-Distribution Detection, Confidence Calibration
Abstract: We propose a novel framework for robust out-of-distribution (OOD) detection by explicitly designing the feature space. Our approach aligns feature norm with model confidence by enforcing a zero-confidence baseline—defined as the feature norm at which the softmax output is uniform—and deriving an upper bound on the feature norm through softmax sensitivity analysis. This strategy enables in-domain samples to exhibit high confidence while ensuring that OOD samples, which naturally possess lower feature norms, yield near-uniform predictions. Unlike existing methods that simply modify the feature norm without optimizing the underlying embedding space, our method learns an optimal feature space via density ratio estimation using Kernel Logistic Regression and feature space augmentation. Our theoretical analysis shows that the risk difference between the true data distribution (comprising both known and unknown samples) and an auxiliary domain—constructed from augmented OOD samples drawn from the inner region of the feature space—is bounded. Extensive experiments show that our approach significantly enhances OOD detection performance.
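The core premise (that a feature norm of zero forces a uniform softmax output, while larger norms permit higher confidence) can be illustrated with a minimal sketch. This is not code from the paper; the linear classifier `W` and feature direction `f` here are hypothetical random values chosen only to demonstrate the monotone relationship between feature norm and maximum softmax probability.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 16))    # hypothetical linear head: 5 classes, 16-dim features
f = rng.normal(size=16)
f = f / np.linalg.norm(f)       # unit-norm feature direction

# Scaling the feature norm from 0 upward: at norm 0 the logits are all zero,
# so the softmax is exactly uniform (the "zero-confidence baseline");
# as the norm grows, the maximum class probability increases.
for norm in [0.0, 1.0, 10.0]:
    p = softmax(W @ (norm * f))
    print(f"norm={norm:5.1f}  max prob={p.max():.3f}")
```

Under this toy setup, a low-norm OOD feature is driven toward the uniform prediction (max probability 1/K = 0.2 for 5 classes), matching the intuition in the abstract that OOD samples with small feature norms yield near-uniform outputs.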
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 11159