Gradient Boosting Performs Gaussian Process Inference

Published: 01 Feb 2023, Last Modified: 14 Oct 2024. ICLR 2023 poster.
Keywords: gradient boosting, gaussian process, knowledge uncertainty, kernel gradient boosting
TL;DR: We prove that gradient boosting converges to the posterior mean of a Gaussian process and can be transformed into a sampler from the posterior, which leads to improved knowledge uncertainty estimates.
Abstract: This paper shows that gradient boosting based on symmetric decision trees can be equivalently reformulated as a kernel method that converges to the solution of a certain Kernel Ridge Regression problem. Consequently, gradient boosting converges to the posterior mean of a Gaussian process, which, in turn, allows us to transform it into a sampler from the posterior: Monte Carlo estimation of the posterior variance over these samples yields knowledge uncertainty estimates. We show that the proposed sampler provides better knowledge uncertainty estimates, leading to improved out-of-domain detection.
Area: Probabilistic Methods (e.g., variational inference, causal inference, Gaussian processes)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2206.05608/code)