Keywords: stochastic differential equations, SGD dynamics, singular-value spectra, Dyson Brownian motion, heavy-tailed distributions
Abstract: Deep neural networks have revolutionized machine learning, yet their training dynamics remain poorly understood theoretically. We develop a continuous-time, matrix-valued stochastic differential equation (SDE) framework that rigorously connects the microscopic dynamics of SGD to the macroscopic evolution of the singular-value spectra of weight matrices. We derive exact SDEs showing that the squared singular values follow Dyson Brownian motion with eigenvalue repulsion, and we characterize the stationary distributions as gamma-type densities with power-law tails, providing the first theoretical explanation for the heavy-tailed "bulk+tail" spectral structure observed empirically in trained networks. Through controlled experiments on transformer and MLP architectures, we validate these predictions and demonstrate quantitative agreement between SDE-based forecasts and the observed spectral evolution, offering a rigorous foundation for understanding why deep learning works.
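The central claim of the abstract, that squared singular values follow Dyson Brownian motion with eigenvalue repulsion, can be illustrated with a minimal Euler-Maruyama simulation. This is an illustrative sketch of the standard Dyson Brownian motion SDE, d\lambda_i = (\beta/2) \sum_{j \ne i} dt / (\lambda_i - \lambda_j) + dB_i, not the paper's exact derivation; all parameter values (n, steps, dt, beta) are arbitrary choices for the demo.

```python
import numpy as np

def dyson_brownian_motion(n=8, steps=2000, dt=1e-4, beta=2.0, seed=0):
    """Euler-Maruyama simulation of Dyson Brownian motion:
    d lambda_i = (beta/2) * sum_{j != i} dt / (lambda_i - lambda_j) + dB_i.
    Returns the (steps+1, n) trajectory of the n eigenvalues, kept sorted.
    """
    rng = np.random.default_rng(seed)
    lam = np.sort(rng.standard_normal(n))
    traj = np.empty((steps + 1, n))
    traj[0] = lam
    for t in range(steps):
        diff = lam[:, None] - lam[None, :]   # pairwise gaps lambda_i - lambda_j
        np.fill_diagonal(diff, np.inf)       # exclude the i == j self-term
        drift = (beta / 2.0) * np.sum(1.0 / diff, axis=1)
        lam = lam + drift * dt + np.sqrt(dt) * rng.standard_normal(n)
        lam = np.sort(lam)                   # repulsion keeps the order a.s.
        traj[t + 1] = lam
    return traj

traj = dyson_brownian_motion()
# The 1/(lambda_i - lambda_j) drift pushes eigenvalues apart, so
# nearest-neighbor gaps stay strictly positive throughout the run.
print("min final gap:", np.min(np.diff(traj[-1])))
```

The repulsion term is what distinguishes this process from n independent Brownian motions: gaps between neighboring eigenvalues never close, which is the mechanism the abstract invokes to explain the spread-out "bulk+tail" spectral shape.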
Code: ipynb
Submission Number: 86