TL;DR: We developed a method for jointly optimizing out-of-distribution (OOD) detection and generalization, based on the neural collapse phenomenon.
Abstract: Out-of-distribution (OOD) detection and OOD generalization are widely studied in Deep Neural Networks (DNNs), yet their relationship remains poorly understood. We empirically show that the degree of Neural Collapse (NC) in a network layer affects these two objectives in opposite directions: stronger NC improves OOD detection but degrades generalization, while weaker NC enhances generalization at the cost of detection. This trade-off suggests that a single feature space cannot serve both tasks simultaneously. To address this, we develop a theoretical framework linking NC to OOD detection and generalization. We show that entropy regularization mitigates NC to improve generalization, while a fixed Simplex ETF projector enforces NC for better detection. Based on these insights, we propose a method to control NC at different DNN layers. In experiments, our method excels at both tasks across OOD datasets and DNN architectures.
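For readers unfamiliar with the fixed Simplex ETF projector mentioned in the abstract, the sketch below shows one standard way to construct a Simplex Equiangular Tight Frame: K unit-norm class vectors whose pairwise cosine similarity is -1/(K-1), the maximally separated geometry that features converge to under neural collapse. This is a minimal illustrative construction, not the authors' implementation; the function name and arguments are assumptions.

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Build a fixed (feat_dim x num_classes) Simplex ETF projector.

    Columns are unit vectors with pairwise cosine similarity
    -1/(num_classes - 1). In the paper's setting such a matrix is
    frozen (not trained) so the projected feature space is forced
    toward the neural-collapse geometry.
    """
    K = num_classes
    assert feat_dim >= K, "need feat_dim >= num_classes for an orthonormal basis"
    rng = np.random.default_rng(seed)
    # Random partial orthogonal matrix U with U^T U = I_K.
    U, _ = np.linalg.qr(rng.standard_normal((feat_dim, K)))
    # Simplex ETF: sqrt(K/(K-1)) * U (I_K - (1/K) 1 1^T).
    return np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
```

As a sanity check, every column of the returned matrix has unit norm, and the Gram matrix has -1/(K-1) everywhere off the diagonal.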
Lay Summary: Modern AI systems often struggle in unfamiliar situations — like a self-driving car encountering something it has never seen before. To make AI safer and more adaptable, it needs two key abilities: recognizing unfamiliar inputs (out-of-distribution detection) and learning from them (out-of-distribution generalization).
These two abilities are usually studied separately. But our research reveals they are linked by a hidden pattern in how neural networks organize information internally — a phenomenon known as Neural Collapse. We found that when this pattern is strong, AI becomes better at spotting the unfamiliar but worse at learning from it. When the pattern weakens, the opposite is true.
To address this trade-off, we designed an AI system that manages this internal pattern differently across its components. This allows one part to specialize in detection and another in learning, enabling the system to do both tasks effectively. Our approach moves us closer to building AI that can safely and reliably adapt to open-ended, ever-changing environments.
Link To Code: https://yousuf907.github.io/ncoodg
Primary Area: General Machine Learning->Representation Learning
Keywords: OOD Generalization, OOD Detection, Neural Collapse, Simplex ETF, Representation Learning, Transfer Learning
Submission Number: 1395