All-Around Neural Collapse for Imbalanced Classification

Published: 01 Jan 2025 · Last Modified: 26 Sept 2025 · IEEE Trans. Knowl. Data Eng. 2025 · CC BY-SA 4.0
Abstract: Neural Collapse (NC) describes an elegant geometric structure in which individual activations (features), class means, and classifier (weight) vectors reach optimal inter-class separability during the terminal phase of training on a balanced dataset. When training shifts to imbalanced classification, this optimal structure can be readily destroyed by the notorious minority collapse, in which the classifier vectors of minority classes are squeezed together. In response, existing works mainly adjust the classifier in an effort to recover NC. However, we discover that this squeezing phenomenon is not confined to classifier vectors but also occurs with class means. Consequently, reconstructing NC on the classifier side alone may be futile: the class means remain compressed, which violates the inherent self-duality of NC (i.e., class means and classifier vectors converge to one another) and, in turn, yields an unsatisfactory collapse of individual activations towards their corresponding class means. To resolve these dilemmas, we present a unified All-around Neural Collapse framework (AllNC) that aims to comprehensively restore NC across multiple aspects, including individual activations, class means, and classifier vectors. We thoroughly analyze its effectiveness and verify on multiple benchmark datasets that it achieves state-of-the-art performance in both balanced and imbalanced settings.
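To make the properties discussed in the abstract concrete, the following is a minimal, illustrative sketch (not the AllNC method itself) of the standard Neural Collapse diagnostics: within-class variability collapse, the simplex-like geometry of centered class means, and the self-duality between class means and classifier vectors. The function name, argument shapes, and label convention are assumptions introduced here for illustration.

```python
import numpy as np

def neural_collapse_diagnostics(features, labels, classifier_weights):
    """Hypothetical NC diagnostics for penultimate-layer features.

    features:           (N, d) array of activations
    labels:             (N,)   integer class labels assumed to be 0..C-1
    classifier_weights: (C, d) array whose row c is the classifier vector of class c
    """
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)

    # Per-class means, centered by the global mean.
    class_means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered_means = class_means - global_mean

    # Variability collapse: within-class scatter should vanish relative to the
    # spread of the (centered) class means.
    within = np.mean([
        np.mean(np.sum((features[labels == c] - class_means[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])
    between = np.mean(np.sum(centered_means ** 2, axis=1))
    variability_ratio = within / (between + 1e-12)

    # Separability: under NC the centered class means approach a simplex
    # equiangular tight frame (pairwise cosine -1/(C-1)); under minority
    # collapse, the means and classifier vectors of minority classes are
    # instead squeezed toward each other (cosines near 1).
    unit_means = centered_means / np.linalg.norm(centered_means, axis=1, keepdims=True)
    mean_cosines = unit_means @ unit_means.T

    # Self-duality: each classifier vector should align with its centered class mean.
    unit_w = classifier_weights / np.linalg.norm(classifier_weights, axis=1, keepdims=True)
    duality_alignment = np.diag(unit_w @ unit_means.T)

    return variability_ratio, mean_cosines, duality_alignment
```

Under this reading, a small variability ratio, near-uniform negative off-diagonal cosines, and diagonal alignments close to 1 would indicate the NC structure the paper seeks to restore for all classes, whereas large off-diagonal cosines among minority classes would signal the squeezing phenomenon described above.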