Convergence of Stochastic Gradient Langevin Dynamics in the Lazy Training Regime

TMLR Paper 6291 Authors

23 Oct 2025 (modified: 03 Dec 2025). Under review for TMLR. License: CC BY 4.0
Abstract: Continuous-time models provide important insights into the training dynamics of optimization algorithms in deep learning. In this work, we establish a non-asymptotic convergence analysis of stochastic gradient Langevin dynamics (SGLD), an Itô stochastic differential equation (SDE) approximation of stochastic gradient descent in continuous time, in the lazy training regime. We show that, under regularity conditions on the Hessian of the loss function, SGLD with multiplicative, state-dependent noise (i) yields a non-degenerate kernel throughout training with high probability, and (ii) converges exponentially fast to the empirical risk minimizer in expectation; moreover, we establish finite-time and finite-width bounds on the optimality gap. We corroborate our theoretical findings with numerical examples in the regression setting.
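To make the setting concrete, the following is a minimal sketch of an SGLD iteration, obtained as an Euler–Maruyama discretization of the Langevin SDE, on a toy least-squares regression problem. The step size `eta`, inverse temperature `beta`, and the isotropic noise model are illustrative assumptions for this sketch; the paper's analysis concerns multiplicative, state-dependent noise, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = X @ w_true + observation noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad_loss(w):
    # Gradient of the empirical risk (1 / 2n) * ||X w - y||^2.
    return X.T @ (X @ w - y) / n

def sgld(w0, eta=0.1, beta=1e4, n_steps=500):
    # Euler-Maruyama discretization of the Langevin SDE:
    #   w_{k+1} = w_k - eta * grad(w_k) + sqrt(2 * eta / beta) * xi_k,
    # with xi_k ~ N(0, I). (Constant isotropic noise here; the paper
    # studies a multiplicative, state-dependent diffusion coefficient.)
    w = w0.copy()
    for _ in range(n_steps):
        w = w - eta * grad_loss(w) + np.sqrt(2 * eta / beta) * rng.normal(size=d)
    return w

w0 = np.zeros(d)
w = sgld(w0)
init_loss = 0.5 * np.mean((X @ w0 - y) ** 2)
final_loss = 0.5 * np.mean((X @ w - y) ** 2)
```

For small `eta` and large `beta`, the iterates concentrate near the empirical risk minimizer, so `final_loss` should drop well below `init_loss` on this quadratic objective.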
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The proof of Theorem 2 was revised under a modified assumption (Assumption 2). We proved that the modified Assumption 2 holds automatically for shallow and deep neural networks in Proposition 3 and Lemma 3, respectively. We provided a detailed comparison with prior work in Appendix A, as the reviewers suggested. We also modified Corollary 4 to address the reviewers' concerns about the well-posedness of the SDE.
Assigned Action Editor: ~Alain_Durmus1
Submission Number: 6291