Optimal scaling laws in learning hierarchical multi-index models

Published: 02 Mar 2026 · Last Modified: 12 May 2026 · Sci4DL 2026 · CC BY 4.0
Keywords: scaling laws, multi-index models, neural networks, spectral method, gradient descent
Abstract: In this work, we provide a sharp theory of scaling laws for two-layer neural networks trained on a class of *hierarchical multi-index* targets, in a genuinely representation-limited regime. We derive exact information-theoretic scaling laws for subspace recovery and prediction error, revealing how the hierarchical features of the target are learned sequentially through a cascade of phase transitions. We further show that these optimal rates are achieved by a simple, target-agnostic spectral estimator, which can be interpreted as the small learning-rate limit of gradient descent on the first-layer weights. Once an adapted representation is identified, the readout can be learned in a statistically optimal manner using an efficient procedure. As a consequence, we provide a unified and rigorous explanation of scaling laws, plateau phenomena, and spectral structure in shallow neural networks trained on such hierarchical targets.
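The abstract does not specify the form of the spectral estimator, so the following is only a minimal illustrative sketch of the standard label-weighted second-moment spectral method commonly used for subspace recovery in multi-index models; the function name `spectral_subspace_estimate`, the toy hierarchical target, and all parameter choices are hypothetical and not taken from the paper.

```python
import numpy as np

def spectral_subspace_estimate(X, y, r):
    """Estimate the r-dimensional relevant subspace of a multi-index model
    y_i = f(U^T x_i), x_i ~ N(0, I_d), via the top eigenvectors of the
    label-weighted second-moment matrix M = (1/n) sum_i y_i (x_i x_i^T - I)."""
    n, d = X.shape
    # Label-weighted empirical second moment, centered by the identity so
    # that (for Gaussian inputs) M concentrates on a matrix supported on
    # the target subspace span(U).
    M = (X.T * y) @ X / n - y.mean() * np.eye(d)
    # Eigenvectors with the r largest |eigenvalues| span the estimate.
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(np.abs(vals))[::-1][:r]
    return vecs[:, idx]  # (d, r) orthonormal basis

# Toy check on a hypothetical rank-2 hierarchical target:
# y = g1(<u1, x>) + g2(<u1, x>) * <u2, x>, where u2 only enters
# through its interaction with u1 (a weaker spectral signal).
rng = np.random.default_rng(0)
d, n, r = 30, 50_000, 2
U = np.linalg.qr(rng.standard_normal((d, r)))[0]  # ground-truth subspace
X = rng.standard_normal((n, d))
z = X @ U
y = z[:, 0] ** 2 + np.tanh(z[:, 0]) * z[:, 1]
U_hat = spectral_subspace_estimate(X, y, r)
# Singular values of U^T U_hat near 1 indicate subspace recovery.
print(np.linalg.svd(U.T @ U_hat, compute_uv=False))
```

In this sketch the second direction u2 appears in M only through its coupling with u1, so its eigenvalue is much smaller in magnitude; this is meant as a loose illustration of how deeper hierarchical features carry a weaker signal and emerge later, consistent in spirit with the cascade of phase transitions described in the abstract.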
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Style Files: I have used the style files.
Submission Number: 107