Neural Collapse Beyond the Unconstrained Features Model: Landscape, Dynamics, and Generalization in the Mean-Field Regime

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: We prove that NC1 (vanishing within-class variability) holds when training a class of 3-layer networks via gradient flow, due to loss landscape properties; we further show co-occurrence of NC1 and small test error for certain data distributions.
Abstract: Neural Collapse is a phenomenon where the last-layer representations of a well-trained neural network converge to a highly structured geometry. In this paper, we focus on its first (and most basic) property, known as NC1: the within-class variability vanishes. While prior theoretical studies establish the occurrence of NC1 via the data-agnostic unconstrained features model, our work adopts a data-specific perspective, analyzing NC1 in a three-layer neural network, with the first two layers operating in the mean-field regime and followed by a linear layer. In particular, we establish a fundamental connection between NC1 and the loss landscape: we prove that points with small empirical loss and gradient norm (thus, close to being stationary) approximately satisfy NC1, and the closeness to NC1 is controlled by the residual loss and gradient norm. We then show that (i) gradient flow on the mean squared error converges to NC1 solutions with small empirical loss, and (ii) for well-separated data distributions, both NC1 and vanishing test loss are achieved simultaneously. This aligns with the empirical observation that NC1 emerges during training while models attain near-zero test error. Overall, our results demonstrate that NC1 arises from gradient training due to the properties of the loss landscape, and they show the co-occurrence of NC1 and small test error for certain data distributions.
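As a concrete illustration of NC1 (vanishing within-class variability), the snippet below is a minimal sketch of a metric commonly used in the neural collapse literature, the class-averaged trace of the within-class covariance against the pseudo-inverse of the between-class covariance; the precise quantity analyzed in the paper may differ, and the function name and interface are hypothetical.

```python
# Minimal sketch of a commonly used NC1 metric: (1/K) * tr(Sigma_W @ pinv(Sigma_B)).
# A value near zero indicates within-class variability collapse (NC1).
# The exact quantity studied in the paper may differ; this is illustrative only.
import numpy as np

def nc1_metric(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (n_samples, dim) last-layer representations; labels: (n_samples,) class ids."""
    classes = np.unique(labels)
    n, d = features.shape
    K = len(classes)
    global_mean = features.mean(axis=0)

    sigma_w = np.zeros((d, d))  # within-class covariance
    sigma_b = np.zeros((d, d))  # between-class covariance
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        centered = fc - mu_c
        sigma_w += centered.T @ centered / n
        sigma_b += np.outer(mu_c - global_mean, mu_c - global_mean) / K

    return float(np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / K)
```

In practice this is evaluated on the penultimate-layer features of a trained network; as training progresses and NC1 emerges, the metric decreases toward zero.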
Lay Summary: Neural collapse is the empirical phenomenon in which the last-layer representations of a well-trained neural network converge to a highly structured geometry. Previous theoretical understanding of this phenomenon is limited to models that do not account for the data, or to models that are partially in the NTK regime (and thus do not learn features) during training. In this work, we prove the occurrence of within-class variability collapse (also referred to as NC1) for a three-layer neural network in the mean-field regime, which does allow feature learning. We also prove the co-occurrence of NC1 and vanishing test loss on well-separated data, which aligns with the empirical observation that NC1 emerges during training while models attain near-zero test error.
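To make the model class concrete, the following is a minimal, hypothetical sketch of a three-layer network whose first two layers use mean-field (1/width) normalization, followed by a linear output layer; the widths, activation, and exact placement of the 1/width factors are illustrative assumptions, not the paper's precise parameterization.

```python
# Minimal sketch: two hidden layers with mean-field (1/width) averaging,
# followed by a linear head. Widths, activation, and normalization placement
# are illustrative assumptions, not the paper's exact setup.
import numpy as np

def three_layer_forward(x, W1, W2, A, act=np.tanh):
    """x: (d,) input; W1: (m1, d); W2: (m2, m1); A: (m2, K) output weights."""
    h1 = act(W1 @ x)                   # first hidden layer, width m1
    h2 = act(W2 @ h1 / W1.shape[0])    # mean-field averaging over the m1 neurons
    return A.T @ h2 / W2.shape[0]      # linear output layer, averaged over m2 neurons

# Example forward pass with random weights (widths and dimensions are arbitrary).
rng = np.random.default_rng(0)
d, m1, m2, K = 32, 512, 512, 10
out = three_layer_forward(rng.standard_normal(d),
                          rng.standard_normal((m1, d)),
                          rng.standard_normal((m2, m1)),
                          rng.standard_normal((m2, K)))
```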
Link To Code: https://github.com/DiyuanWu/icml25_expr
Primary Area: Deep Learning->Theory
Keywords: neural collapse, mean-field analysis, gradient flow, generalization error, loss landscape
Submission Number: 3886