Rolling Ball Optimizer: Learning by ironing out loss landscape wrinkles

ICLR 2026 Conference Submission 16990 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Loss landscape, Deep Learning, Optimization
Abstract: Training large neural networks (NNs) requires optimizing high-dimensional, data-dependent loss functions. The optimization landscape of these functions is often highly complex and textured, even fractal-like, with many spurious (sometimes sharp) local minima, ill-conditioned valleys, degenerate points, and saddle points. Complicating things further, these landscape characteristics are a function of the training data, so noise in the training data can propagate forward and give rise to unrepresentative small-scale geometry. This poses a difficulty for gradient-based optimization methods, which rely on local geometry to compute their updates and are therefore vulnerable to being derailed by noisy data. In practice, this translates to a strong dependence of the optimization dynamics on the noise in the data, and hence to poor generalization performance. To remedy this problem, we propose a new optimization procedure, the Rolling Ball Optimizer (RBO), which breaks this spatial locality by explicitly incorporating information from a larger region of the loss landscape into its updates. We achieve this by simulating the motion of a rigid sphere of finite radius $\rho>0$ rolling on the loss landscape, a straightforward generalization of Gradient Descent (GD) that reduces to it in the *infinitesimal* limit $(\rho\to0)$. The radius serves as a hyperparameter that determines the scale at which RBO "sees" the loss landscape, allowing control over the granularity of its interaction with the landscape. We are motivated by the intuition that the large-scale geometry of the loss landscape is less data-specific than its fine-grained structure, and that it is easier to optimize. We support this intuition by proving that our algorithm has a smoothing effect on the loss function. Evaluation against SGD, SAM, and Entropy-SGD on MNIST and CIFAR-10/100 demonstrates promising results in terms of convergence speed, training accuracy, and generalization performance.
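The abstract does not spell out the update rule, so the following is only a minimal NumPy sketch of how a "rolling ball" step *could* look, assuming the ball's centre sits at the contact point offset by $\rho$ along the upward unit normal of the loss graph. The function names, the fixed-point approximation of the contact point, and the toy loss are hypothetical illustrations, not the authors' implementation. As $\rho\to0$ the contact point collapses onto the current iterate and the step reduces to plain GD, matching the limit stated above.

```python
import numpy as np

def rolling_ball_step(w, grad, lr=0.05, rho=0.5, n_fixed_point=5):
    """Illustrative 'rolling ball' style update (hypothetical sketch).

    View the graph of the loss as a surface in R^{n+1}. A ball of radius
    rho resting on that surface touches it at a contact point w_c, and its
    centre sits at w_c plus rho times the upward unit normal. Given the
    centre w, approximate w_c by a short fixed-point iteration, then step
    the centre using the gradient evaluated at w_c. For rho -> 0 the
    contact point coincides with w and the step is plain gradient descent.
    """
    w_c = w.copy()
    for _ in range(n_fixed_point):
        g = grad(w_c)
        # Spatial part of "centre = contact + rho * unit normal", solved
        # approximately for the contact point given the centre w.
        w_c = w + rho * g / np.sqrt(1.0 + g @ g)
    return w - lr * grad(w_c)

# Toy "wrinkled" loss: a smooth bowl plus high-frequency ripples.
def loss(w):
    return 0.5 * w @ w + 0.1 * np.sum(np.sin(20.0 * w))

def grad(w):
    return w + 2.0 * np.cos(20.0 * w)

w = np.array([2.0, -1.5])
for _ in range(200):
    w = rolling_ball_step(w, grad, lr=0.05, rho=0.5)
print("final w:", w, "loss:", loss(w))
```

In this sketch the gradient is evaluated at a point displaced from the current iterate by up to $\rho$, so with $\rho$ comparable to the wavelength of the ripples the update is no longer tied to the finest-scale wrinkles; this is only one crude way to picture the scale-dependent smoothing effect described in the abstract.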
Primary Area: optimization
Submission Number: 16990