Keywords: Loss reweighting, Early stopping, Class imbalance, Learning dynamics, Unconstrained features model (UFM)
TL;DR: Reweighting helps imbalanced classification not by changing the final solution but by equalizing feature learning speeds early in training, an effect we explain using a small-scale model.
Abstract: The application of loss reweighting in modern deep learning presents a nuanced picture. While it fails to alter the terminal phase of training in overparameterized deep neural networks (DNNs) trained on high-dimensional datasets, empirical evidence consistently shows that it offers significant benefits early in training. To demonstrate and analyze this phenomenon transparently, we introduce a small-scale model (SSM). The SSM is specifically designed to abstract away the complexities of both the DNN architecture and the input data, while retaining key information about the structure of the imbalance in its spectral components. On the one hand, the SSM reveals how vanilla empirical risk minimization preferentially learns to distinguish majority classes over minority classes early in training, thereby delaying minority learning. On the other hand, reweighting restores balanced learning dynamics, enabling the simultaneous learning of features associated with both majorities and minorities.
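To make the contrast between vanilla ERM and reweighting concrete, here is a minimal, hypothetical sketch; it is not the paper's SSM. It compares the two objectives on a 90/10 imbalanced Gaussian toy problem with a linear model, where the data, architecture, step count, and learning rate are all assumptions chosen for illustration. It uses the standard inverse-frequency weighting passed through the `weight` argument of PyTorch's `cross_entropy`.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy setup (not the paper's SSM): 2 Gaussian classes, 90/10 split.
torch.manual_seed(0)
n_major, n_minor, d = 900, 100, 20
X = torch.cat([torch.randn(n_major, d) + 1.0,   # majority class
               torch.randn(n_minor, d) - 1.0])  # minority class
y = torch.cat([torch.zeros(n_major, dtype=torch.long),
               torch.ones(n_minor, dtype=torch.long)])

# Inverse-frequency class weights, scaled so the per-sample average weight is 1.
counts = torch.bincount(y).float()
weights = counts.sum() / (len(counts) * counts)

# Two identical linear models: one trained with vanilla ERM, one reweighted.
model_erm = torch.nn.Linear(d, 2)
model_rw = torch.nn.Linear(d, 2)
model_rw.load_state_dict(model_erm.state_dict())  # same initialization
opt_erm = torch.optim.SGD(model_erm.parameters(), lr=0.1)
opt_rw = torch.optim.SGD(model_rw.parameters(), lr=0.1)

for step in range(200):
    for model, opt, w in [(model_erm, opt_erm, None),       # vanilla ERM
                          (model_rw, opt_rw, weights)]:     # reweighted loss
        opt.zero_grad()
        loss = F.cross_entropy(model(X), y, weight=w)
        loss.backward()
        opt.step()
    if step % 50 == 0:
        with torch.no_grad():
            for name, model in [("ERM", model_erm), ("reweighted", model_rw)]:
                pred = model(X).argmax(dim=1)
                minority_acc = (pred[y == 1] == 1).float().mean()
                print(f"step {step:3d} | {name:10s} | minority acc {minority_acc:.2f}")
```

In this sketch, reweighting scales the minority-class gradient up by roughly a factor of nine relative to the majority class, so the minority direction is learned from the start rather than after the majority is fit, mirroring the equalized early dynamics described in the abstract.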
Code: ipynb
Submission Number: 66