Keywords: Out-of-Distribution Detection, Model Robustness, Anomaly Detection, Conformal Prediction, Outlier Synthesis
TL;DR: We improve Out-of-Distribution detection by training neural networks with geometrically guided (PCA) and statistically bounded (Conformal Prediction) virtual outliers that better approximate the boundary of in-distribution data.
Abstract: Deep neural networks for image classification often exhibit overconfidence on out-of-distribution (OOD) samples. To address this, we introduce Geometric Conformal Outlier Synthesis (GCOS), a training-time regularization framework that improves OOD robustness at inference. GCOS addresses a limitation of prior synthesis methods by generating virtual outliers in the hidden feature space that respect the learned manifold structure of in-distribution (ID) data. Synthesis proceeds in two stages: (i) PCA on training features identifies geometrically informed, off-manifold directions; (ii) a conformally inspired shell, defined by empirical quantiles of a nonconformity score on a calibration set, adaptively controls the synthesis magnitude to produce boundary samples. The shell ensures that generated outliers are neither trivially detectable nor indistinguishable from ID data, facilitating smoother learning of robust features. This is combined with a contrastive regularization objective that promotes separability of ID and OOD samples in a chosen score space, such as the Mahalanobis distance or an energy-based score. Experiments show that GCOS improves OOD detection over baselines under the standard energy-based inference approach. As an exploratory extension, the framework transitions naturally to conformal OOD inference, which converts uncertainty scores into statistically valid p-values and enables thresholds with formal error guarantees, offering a pathway toward more predictable and reliable OOD detection.
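A minimal sketch of the two-stage synthesis described in the abstract, plus the conformal p-value extension. All concrete choices here are assumptions for illustration, not the paper's implementation: the feature arrays are random stand-ins for penultimate-layer activations, the nonconformity score is a Mahalanobis-style distance to the training mean, and the shell quantiles (0.90, 0.99) and number of off-manifold directions (4) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for ID training features and a held-out
# calibration split (in practice: hidden-layer activations).
features = rng.normal(size=(500, 16))
calib = rng.normal(size=(200, 16))

# Stage (i): PCA on training features. The lowest-variance principal
# directions serve as off-manifold synthesis directions.
mean = features.mean(axis=0)
centered = features - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
off_manifold = vt[-4:]  # 4 least-variance directions (assumed count)

# Stage (ii): a conformally inspired shell. Nonconformity here is a
# Mahalanobis-style distance (one possible score choice); empirical
# quantiles of calibration scores bound the synthesis magnitude.
cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(16)
prec = np.linalg.inv(cov)

def score(x):
    d = x - mean
    return np.sqrt(np.einsum('...i,ij,...j->...', d, prec, d))

calib_scores = score(calib)
inner, outer = np.quantile(calib_scores, [0.90, 0.99])  # shell radii

# Synthesize a virtual outlier: perturb an ID feature along a random
# off-manifold combination until its score lands inside the shell.
def synthesize(x, n_tries=50):
    for _ in range(n_tries):
        direction = rng.normal(size=4) @ off_manifold
        direction /= np.linalg.norm(direction)
        t = rng.uniform(0.5, 5.0)  # assumed magnitude range
        cand = x + t * direction
        if inner < score(cand) <= outer:
            return cand
    return None  # give up if the shell is never hit

outliers = [v for x in features[:100] if (v := synthesize(x)) is not None]

# Exploratory extension: a conformal p-value for a test feature, valid
# under exchangeability of calibration and test scores.
def conformal_p(x):
    s = score(x)
    return (1 + np.sum(calib_scores >= s)) / (len(calib_scores) + 1)
```

By construction every synthesized outlier scores strictly above the 90th calibration percentile (so it is not indistinguishable from ID data) and at most the 99th (so it is not trivially far from the boundary), which is the "neither too easy nor too hard" property the shell is meant to enforce.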
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 16467