Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks

Published: 01 May 2025 · Last Modified: 06 Aug 2025 · ICML 2025 Oral · CC BY 4.0
TL;DR: Loss curves from compute-optimally trained models collapse onto a universal shape, from which we can derive both theoretical insights and practical diagnostics for scaling.
Abstract: What scaling limits govern neural network training dynamics when model size and training time grow in tandem? We show that despite the complex interactions between architecture, training algorithms, and data, compute-optimally trained models exhibit a remarkably precise universality. Specifically, loss curves from models of varying sizes collapse onto a single universal curve when training compute and loss are normalized to unity at the end of training. With learning rate decay, the collapse becomes so tight that differences in the normalized curves across models fall below the noise floor of individual loss curves across random seeds, a phenomenon we term supercollapse. We observe supercollapse across learning rate schedules, datasets, and architectures, including transformers trained on next-token prediction, and find it breaks down when hyperparameters are scaled suboptimally, providing a precise and practical indicator of good scaling. We explain these phenomena by connecting collapse to the power-law structure in typical neural scaling laws, and analyzing a simple yet surprisingly effective model of SGD noise dynamics that accurately predicts loss curves across various learning rate schedules and quantitatively explains the origin of supercollapse.
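The normalization described in the abstract is straightforward to apply to logged training curves. Below is a minimal sketch (not the authors' released code, which is linked further down): each curve's compute and loss are rescaled to equal one at the end of training and the resulting curves are overlaid to check for collapse. The `plot_normalized_curves` helper, the `curves` dictionary, and the synthetic power-law data are hypothetical placeholders for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_normalized_curves(curves):
    """Overlay loss curves after rescaling compute and loss to 1 at the end of training.

    `curves` maps a label (e.g. model size) to a (compute, loss) pair of 1-D arrays
    recorded along a single training run.
    """
    for label, (compute, loss) in curves.items():
        compute = np.asarray(compute, dtype=float)
        loss = np.asarray(loss, dtype=float)
        # Normalize so the final point of every run sits at (1, 1).
        plt.plot(compute / compute[-1], loss / loss[-1], label=label)
    plt.xlabel("compute / final compute")
    plt.ylabel("loss / final loss")
    plt.xscale("log")
    plt.yscale("log")
    plt.legend()
    plt.show()

if __name__ == "__main__":
    # Synthetic power-law loss curves with small noise, purely to exercise the plot.
    rng = np.random.default_rng(0)
    curves = {}
    for label, final_compute in [("small", 1e17), ("large", 1e18)]:
        compute = np.logspace(np.log10(final_compute) - 3, np.log10(final_compute), 200)
        loss = 3.0 * compute ** -0.05 * (1.0 + 0.01 * rng.standard_normal(compute.size))
        curves[label] = (compute, loss)
    plot_normalized_curves(curves)
```

With real compute-optimal runs in place of the synthetic data, a tight overlap of the normalized curves is the collapse the paper describes, and visible deviations would signal suboptimally scaled hyperparameters.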
Lay Summary: We find that the loss curves of neural networks follow nearly identical shapes as models scale up in size and training duration. We find evidence that this surprising phenomenon reveals valuable diagnostic information about neural network training dynamics at scale, and we provide some theoretical explanation of the mechanisms behind it.
Link To Code: https://github.com/shikaiqiu/supercollapse
Primary Area: Deep Learning
Keywords: Scaling Laws, Optimization
Submission Number: 13037