AxlePro: Momentum-Accelerated Batched Training of Kernel Machines

Published: 22 Jan 2025, Last Modified: 09 Mar 2025, AISTATS 2025 Poster, CC BY 4.0
TL;DR: We propose a new training algorithm for kernel methods, called AxlePro, based on momentum-accelerated, preconditioned SGD in the primal space.
Abstract: In this paper, we derive a novel iterative algorithm for learning kernel machines. Our algorithm, $\textsf{AxlePro}$, extends the $\textsf{EigenPro}$ family of algorithms via momentum-based acceleration. $\textsf{AxlePro}$ can be applied to train kernel machines with arbitrary positive semidefinite kernels. We provide a convergence guarantee for the algorithm and demonstrate the speed-up of $\textsf{AxlePro}$ over competing algorithms via numerical experiments. Furthermore, we derive a version of $\textsf{AxlePro}$ for training large kernel models on arbitrarily large datasets.
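
To make the high-level description concrete, below is a minimal sketch of momentum-accelerated, preconditioned SGD on the coefficients of a kernel machine, in the spirit of the EigenPro family. The Gaussian kernel, the heavy-ball momentum schedule, and the top-q eigendirection preconditioner are illustrative assumptions for this sketch, not the paper's exact AxlePro update, preconditioner, or hyperparameter schedule.

```python
# Hedged sketch: momentum-accelerated, preconditioned SGD for a kernel machine
# f(x) = sum_j alpha_j k(x_j, x). This is NOT the authors' exact AxlePro update;
# the kernel, preconditioner, and momentum/step-size choices are illustrative.
import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def train_kernel_machine(X, y, n_epochs=20, batch_size=64, lr=0.5, momentum=0.9,
                         top_q=10, bandwidth=1.0, seed=0):
    """Heavy-ball momentum SGD on kernel coefficients alpha, with an
    EigenPro-style preconditioner that damps the top-q eigendirections of the
    kernel matrix (assumed form for illustration only)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    top_q = min(top_q, n - 1)
    K = gaussian_kernel(X, X, bandwidth)               # n x n kernel matrix
    eigvals, eigvecs = np.linalg.eigh(K / n)           # spectrum of normalized kernel
    lam = eigvals[::-1][:top_q]                        # top-q eigenvalues
    V = eigvecs[:, ::-1][:, :top_q]                    # corresponding eigenvectors
    tail = eigvals[::-1][top_q]                        # (q+1)-th eigenvalue
    alpha = np.zeros(n)                                # kernel coefficients
    velocity = np.zeros(n)                             # momentum buffer
    for _ in range(n_epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            residual = K[idx] @ alpha - y[idx]         # f(x_i) - y_i on the batch
            grad = np.zeros(n)
            grad[idx] = residual / len(idx)            # stochastic gradient in coefficient space
            # Preconditioning: shrink gradient components along the top-q
            # eigendirections so a larger step size can be used.
            coeffs = V.T @ grad
            grad = grad - V @ ((1.0 - tail / lam) * coeffs)
            velocity = momentum * velocity - lr * grad # heavy-ball momentum
            alpha = alpha + velocity
    return alpha

# Usage (assumed data arrays): alpha = train_kernel_machine(X_train, y_train);
# predictions are gaussian_kernel(X_test, X_train) @ alpha.
```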
Submission Number: 568