Keywords: neural collapse, ResNets, transformers, LayerNorm, weight regularization
TL;DR: We prove that neural collapse is approximately optimal for end-to-end training of deep regularized ResNets and transformers.
Abstract: The empirical emergence of neural collapse---a surprising symmetry in the feature representations of the training data in the penultimate layer of deep neural networks---has spurred a line of theoretical research aimed at its understanding. However, existing work either focuses on data-agnostic models or remains limited to multi-layer perceptrons. We fill both of these gaps by analyzing modern architectures in a data-aware regime: we prove that global optima of deep regularized transformers and residual networks (ResNets) with LayerNorm trained with cross-entropy or mean squared error loss are approximately collapsed, and the approximation gets tighter as the depth grows. More generally, we formally reduce any end-to-end large-depth ResNet or transformer training problem to an equivalent unconstrained features model, thus justifying its wide use in the literature even beyond data-agnostic settings. Our theoretical results are supported by experiments on computer vision and language datasets showing that, as the depth grows, neural collapse indeed becomes more prominent.
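As an illustrative aside (not part of the submission), the degree of neural collapse in penultimate-layer features is often quantified by the "NC1" within-class variability metric: the trace of the within-class covariance projected through the pseudo-inverse of the between-class covariance, which tends to zero as features collapse to their class means. The sketch below computes this metric on synthetic features; all names, shapes, and noise scales are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, per_class, dim = 4, 50, 16

# Synthetic "collapsed" features: each class clusters tightly around its mean
# (noise scale 0.01 is an arbitrary choice for illustration).
means = rng.normal(size=(num_classes, dim))
features = np.concatenate(
    [m + 0.01 * rng.normal(size=(per_class, dim)) for m in means]
)
labels = np.repeat(np.arange(num_classes), per_class)

global_mean = features.mean(axis=0)
within = np.zeros((dim, dim))
between = np.zeros((dim, dim))
for c in range(num_classes):
    fc = features[labels == c]
    mc = fc.mean(axis=0)
    centered = fc - mc
    within += centered.T @ centered / len(fc)   # within-class scatter
    d = (mc - global_mean)[:, None]
    between += d @ d.T                           # between-class scatter
within /= num_classes
between /= num_classes

# NC1 metric: trace(Sigma_W @ pinv(Sigma_B)); approaches 0 under collapse.
nc1 = np.trace(within @ np.linalg.pinv(between))
print(f"NC1 = {nc1:.6f}")
```

On tightly clustered features like these, the metric is close to zero; spreading each class out (larger noise) drives it up, which is how depth-wise collapse trends are typically visualized in experiments of this kind.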
Student Paper: Yes
Submission Number: 24