Stabilizing Gradient Descent via Second-Order Control-Theoretic Dynamics

20 Sept 2025 (modified: 22 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Gradient Descent, Control Theory, Stability Analysis
Abstract: In this paper, we establish a fundamental connection between the stability of gradient descent dynamics and the curvature of the underlying loss landscape from a continuous-time perspective. We show that the signs of the real parts of the Hessian’s eigenvalues directly govern the convergence behavior of gradient-based optimization. Through analytically tractable, low-dimensional toy examples, we demonstrate that gradient descent can diverge even in simple convex settings. To address this issue, we formulate gradient descent as a second-order dynamical system and introduce a controller that guarantees local asymptotic stability by regulating the system’s eigen-structure. Notably, we show that the proposed controller admits a variational interpretation and can be realized as a gradient guidance term augmenting the original gradient. Empirical results on numerical examples with various curvatures and learning rates validate our theoretical findings and demonstrate that the proposed method improves both stability and convergence behavior.
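A minimal numerical sketch (illustrative only, not the paper's controller) of the divergence claim: on a one-dimensional convex quadratic f(x) = ½λx², the gradient descent update map is x ← (1 − ηλ)x, so iterates diverge once the learning rate η exceeds 2/λ, even though the problem is convex.

```python
# Gradient descent on f(x) = 0.5 * lam * x**2, whose gradient is lam * x.
# The update x <- (1 - eta * lam) * x converges iff |1 - eta * lam| < 1,
# i.e. iff eta < 2 / lam; beyond that threshold the iterates blow up.
def gd(lam, eta, x0=1.0, steps=50):
    x = x0
    for _ in range(steps):
        x -= eta * lam * x
    return x

lam = 10.0                          # curvature of the quadratic
stable = abs(gd(lam, eta=0.15))     # |1 - 1.5| = 0.5 < 1 -> contracts
unstable = abs(gd(lam, eta=0.25))   # |1 - 2.5| = 1.5 > 1 -> diverges
print(stable < 1e-6, unstable > 1e6)  # -> True True
```

The example uses hypothetical parameter values chosen only to sit on either side of the 2/λ threshold; the paper's toy examples and controller are not reproduced here.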
Primary Area: optimization
Submission Number: 23045