Keywords: Out-of-Distribution Detection, Deep Learning, Feature Representation, Normalization, Model Robustness, Empirical Study, Representation Geometry
Abstract: Out-of-distribution (OOD) detection is critical for the reliable deployment and better understanding of deep learning models. To address this challenge, various methods relying on the Mahalanobis distance have been proposed and widely employed. However, the impact of representation geometry and feature normalization on the OOD performance of Mahalanobis-based methods is still not fully understood, which may limit their downstream application. To address this gap, we conduct a comprehensive empirical study across diverse image foundation models, datasets, and distance normalization schemes. First, our analysis shows that Mahalanobis-based methods are not universally reliable. Second, we characterize the ideal geometry of data representations and demonstrate that spectral and intrinsic-dimensionality metrics can accurately predict a model's OOD performance. Finally, we analyze how normalization impacts OOD performance. Building upon these studies, we propose a conformal generalization of the recently proposed $\ell_2$ normalization that allows control over the degree of radial expansion of the representation geometry, which in turn helps improve OOD detection. By bridging the gap between representation geometry, normalization, and OOD performance, our findings offer new insights into the design of more effective and reliable deep learning models.
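To make the abstract's pipeline concrete, here is a minimal sketch of class-conditional Mahalanobis OOD scoring combined with a conformal feature normalization of the form $x \mapsto x / \lVert x \rVert^{\alpha}$, where $\alpha = 1$ reduces to $\ell_2$ normalization and $\alpha = 0$ leaves features unchanged. The function names, the shared-covariance estimate, and the exponent `alpha` as the "degree of radial expansion" knob are assumptions for illustration, not the authors' implementation.

```python
import numpy as np


def conformal_normalize(feats: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Rescale each feature vector by 1 / ||x||^alpha (alpha is a hypothetical knob;
    alpha=1 is l2 normalization, alpha=0 is the identity)."""
    norms = np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12
    return feats / norms**alpha


def mahalanobis_ood_score(train_feats, train_labels, test_feats, alpha=1.0):
    """Negative minimum class-conditional Mahalanobis distance with a shared
    (tied) covariance; higher score = more in-distribution."""
    train_feats = conformal_normalize(train_feats, alpha)
    test_feats = conformal_normalize(test_feats, alpha)

    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0) for c in classes])

    # Center each class at its own mean, then estimate one shared covariance.
    centered = np.concatenate(
        [train_feats[train_labels == c] - means[i] for i, c in enumerate(classes)]
    )
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    prec = np.linalg.inv(cov)

    # Squared Mahalanobis distance of every test point to every class mean.
    diffs = test_feats[:, None, :] - means[None, :, :]      # (n_test, n_classes, d)
    d2 = np.einsum("ncd,de,nce->nc", diffs, prec, diffs)    # (n_test, n_classes)
    return -d2.min(axis=1)                                   # OOD score per test sample
```

In this sketch, sweeping `alpha` is one plausible way to probe how much radial expansion or contraction of the representation geometry helps OOD detection on a given backbone.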
Primary Area: interpretability and explainable AI
Submission Number: 25544