Abstract: Stochastic Gradient Descent (SGD) often slows in the later stages of training due to anisotropic curvature and gradient noise.
We analyze preconditioned SGD in the geometry induced by a symmetric positive definite matrix $\mathbf{M}$,
deriving bounds in which both the convergence rate and the stochastic noise floor are governed by $\mathbf{M}$-dependent quantities:
the rate through an effective condition number in the $\mathbf{M}$-metric, and the floor through the product of that condition number and the preconditioned noise level.
For nonconvex objectives, we establish a preconditioner-dependent basin-stability guarantee:
when smoothness and basin size are measured in the $\mathbf{M}$-norm, the probability that the iterates remain in a well-behaved local region admits an explicit lower bound. This perspective is particularly relevant in Scientific Machine Learning (SciML), where achieving small training loss under stochastic updates is closely tied to physical fidelity, numerical stability, and constraint satisfaction.
The framework applies to both diagonal/adaptive and curvature-aware preconditioners and yields a simple design principle: choose $\mathbf{M}$ to improve local conditioning while attenuating noise. Experiments on a quadratic diagnostic and three SciML benchmarks validate the predicted rate–floor behavior.
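As a point of reference, a minimal sketch of the update analyzed in this setting (the abstract does not state it explicitly, so the exact form below is an assumption): preconditioned SGD with step size $\eta$ and preconditioner $\mathbf{M}$ takes
$$\mathbf{x}_{t+1} = \mathbf{x}_t - \eta\, \mathbf{M}^{-1}\mathbf{g}_t, \qquad \mathbf{g}_t = \nabla f(\mathbf{x}_t) + \boldsymbol{\xi}_t,$$
where $\boldsymbol{\xi}_t$ denotes the stochastic gradient noise, and progress, smoothness, and basin size are measured in the $\mathbf{M}$-norm $\|\mathbf{v}\|_{\mathbf{M}} = (\mathbf{v}^{\top}\mathbf{M}\,\mathbf{v})^{1/2}$.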
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Reza_Babanezhad_Harikandeh1
Submission Number: 6634