Low-Precision Arithmetic for Fast Gaussian Processes

Published: 20 May 2022, Last Modified: 20 Oct 2024, UAI 2022 Poster
Keywords: Gaussian processes, numerical linear algebra, half precision
TL;DR: We make conjugate gradients in half precision work for GPs.
Abstract: Low-precision arithmetic has had a transformative effect on the training of neural networks, reducing computation, memory, and energy requirements. However, despite its promise, low-precision arithmetic has received little attention for Gaussian process (GP) training, largely because GPs require sophisticated linear algebra routines that are unstable in low precision. We study the different failure modes that can occur when training GPs in half precision. To circumvent these failure modes, we propose a multi-faceted approach involving conjugate gradients with re-orthogonalization, mixed precision, compact kernels, and preconditioners. Our approach significantly improves the numerical stability and practical performance of conjugate gradients in low precision over a wide range of settings, and reduces the runtime of training on $1.8$ million data points to $10$ hours on a single GPU, without requiring any sparse approximations.
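
To illustrate the core ideas in the abstract (half-precision kernel matrix-vector products, float32 accumulation of inner products, and re-orthogonalization of residuals), here is a minimal sketch, not the authors' implementation: the function names (`rbf_kernel`, `cg_half`), the toy data, and all hyperparameters are illustrative assumptions, and the sketch omits the paper's compact kernels and preconditioners.

```python
# Minimal sketch: conjugate gradients for the GP solve K x = y with the kernel
# matrix stored in half precision, inner products accumulated in float32, and
# Gram-Schmidt re-orthogonalization of residuals. Illustrative, not the paper's code.
import torch


def rbf_kernel(X, lengthscale=1.0, noise=1e-2):
    # Squared-exponential kernel plus observation noise, built in float32
    # (the solver below casts it to half precision on GPU).
    d2 = torch.cdist(X, X).pow(2)
    return torch.exp(-0.5 * d2 / lengthscale**2) + noise * torch.eye(len(X))


def cg_half(K, y, max_iters=50, tol=1e-3):
    # Use float16 storage on GPU; fall back to float32 on CPU, where half matmul
    # support is limited.
    dtype = torch.float16 if torch.cuda.is_available() else torch.float32
    device = "cuda" if torch.cuda.is_available() else "cpu"
    K, y = K.to(device, dtype), y.to(device, dtype)

    x = torch.zeros_like(y)
    r = y - K @ x            # initial residual
    p = r.clone()            # initial search direction
    basis = []               # normalized past residuals, for re-orthogonalization

    for _ in range(max_iters):
        Kp = K @ p                                         # half-precision matvec
        rr = torch.dot(r.float(), r.float())               # float32 accumulation
        alpha = rr / torch.dot(p.float(), Kp.float())
        x = x + (alpha * p.float()).to(dtype)
        r_new = r.float() - alpha * Kp.float()

        # Re-orthogonalize the new residual against all previous residuals.
        for q in basis:
            r_new = r_new - torch.dot(r_new, q) * q

        if r_new.norm() < tol:
            break
        basis.append(r_new / r_new.norm())
        beta = torch.dot(r_new, r_new) / rr
        p = (r_new + beta * p.float()).to(dtype)
        r = r_new.to(dtype)
    return x


# Usage on a toy dataset: approximate K^{-1} y for a GP predictive mean.
X, y = torch.randn(500, 3), torch.randn(500)
solution = cg_half(rbf_kernel(X), y)
```

Note that storing every residual for re-orthogonalization costs O(n · iterations) extra memory; the paper's full recipe additionally relies on preconditioning and compact (compactly supported) kernels to keep iteration counts low, which this sketch does not include.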
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/low-precision-arithmetic-for-fast-gaussian/code)