KOALA++: Efficient Kalman-Based Optimization with Gradient-Covariance Products

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Optimization, Kalman Filter, Deep Learning
TL;DR: A quasi-second-order optimizer with first-order efficiency, achieved using a Kalman filter.
Abstract: We propose KOALA++, a scalable Kalman-based optimization algorithm that explicitly models structured gradient uncertainty in neural network training. Unlike second-order methods, which rely on expensive second-order derivative computations, our method directly estimates the parameter covariance matrix by recursively updating compact gradient-covariance products. This design improves upon the original KOALA framework, which assumed a diagonal covariance, by implicitly capturing richer uncertainty structure without storing the full covariance matrix or performing large matrix inversions. Across diverse tasks, including image classification and language modeling, KOALA++ achieves accuracy on par with or better than state-of-the-art second-order optimizers while maintaining the efficiency of first-order methods.
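To make the Kalman-filtering view of training concrete, the sketch below shows a generic extended-Kalman-filter-style parameter update with the scalar loss as the observation and a diagonal covariance, in the spirit of the original KOALA framework that the abstract contrasts against. This is an illustrative assumption, not the KOALA++ algorithm itself: the function name `ekf_style_step` and the hyperparameters `R`, `Q`, and `loss_target` are hypothetical choices, and KOALA++ replaces the diagonal covariance here with recursively updated gradient-covariance products.

```python
import torch

def ekf_style_step(params, loss, grads, P_diag, R=1.0, Q=1e-4, loss_target=0.0):
    """
    One illustrative EKF-style parameter update (assumed form, not the paper's).

    State: the flattened network parameters. Observation: the scalar loss,
    whose Jacobian with respect to the parameters is the gradient vector g.
    A diagonal covariance P_diag is kept for simplicity, as in the original
    KOALA; KOALA++ instead tracks gradient-covariance products to capture
    richer structure without storing the full covariance matrix.
    """
    g = torch.cat([gr.reshape(-1) for gr in grads])        # flattened gradient (Jacobian of the loss)
    theta = torch.cat([p.reshape(-1) for p in params])     # flattened parameters (filter state)

    # Innovation: gap between the observed loss and the assumed target loss.
    innovation = loss - loss_target

    # Kalman gain with a diagonal covariance: K = P g / (g^T P g + R).
    Pg = P_diag * g
    S = g.dot(Pg) + R                                      # scalar innovation variance
    K = Pg / S

    # State update: theta <- theta - K * (loss - loss_target).
    theta_new = theta - K * innovation
    # Diagonal approximation of P <- (I - K g^T) P + Q.
    P_new = P_diag - (K * g) * P_diag + Q

    # Scatter the flattened update back into the parameter tensors.
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p.copy_(theta_new[offset:offset + n].view_as(p))
            offset += n
    return P_new
```

Note the contrast with the abstract: the expensive objects here are only gradient-covariance products such as `P_diag * g` and `g.dot(Pg)`, and the innovation variance `S` is a scalar, so no full covariance matrix is ever stored or inverted, which is what keeps the per-step cost at the level of a first-order method.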
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 24013