Uncertainty in Gradient Boosting via Ensembles

28 Sept 2020, 15:49 (modified: 23 Mar 2021, 21:26), ICLR 2021 Poster
Keywords: uncertainty, ensembles, gradient boosting, decision trees, knowledge uncertainty
Abstract: For many practical, high-risk applications, it is essential to quantify uncertainty in a model's predictions to avoid costly mistakes. While predictive uncertainty is widely studied for neural networks, the topic remains under-explored for models based on gradient boosting, even though gradient boosting often achieves state-of-the-art results on tabular data. This work examines a probabilistic ensemble-based framework for deriving uncertainty estimates in the predictions of gradient boosting classification and regression models. We conducted experiments on a range of synthetic and real datasets and investigated the applicability of ensemble approaches to gradient boosting models that are themselves ensembles of decision trees. Our analysis shows that ensembles of gradient boosting models successfully detect anomalous inputs while having limited ability to improve the predicted total uncertainty. Importantly, we also propose the concept of a virtual ensemble, which obtains the benefits of an ensemble from only one gradient boosting model and thus significantly reduces computational complexity.
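The ensemble-based decomposition the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it trains several GBDT classifiers with different random seeds (here using scikit-learn's `GradientBoostingClassifier` as a stand-in) and splits predictive entropy into total, data, and knowledge uncertainty, where knowledge uncertainty is the mutual information between the prediction and the ensemble member.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative setup: a small synthetic binary-classification dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# An ensemble of GBDT models differing only in their random seed.
models = [
    GradientBoostingClassifier(n_estimators=50, random_state=seed).fit(X, y)
    for seed in range(5)
]

# Class-probability predictions, shape (n_models, n_points, n_classes).
probs = np.stack([m.predict_proba(X[:5]) for m in models])

def entropy(p, axis=-1):
    """Shannon entropy in nats, clipped for numerical stability."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

# Total uncertainty: entropy of the ensemble-averaged prediction.
total = entropy(probs.mean(axis=0))
# Data (aleatoric) uncertainty: average per-model entropy.
data = entropy(probs).mean(axis=0)
# Knowledge (epistemic) uncertainty: their difference (mutual information),
# which is large where the models disagree, e.g. on anomalous inputs.
knowledge = total - data
```

By Jensen's inequality, total uncertainty is never below data uncertainty, so the knowledge term is non-negative; inputs with high knowledge uncertainty are candidates for out-of-domain detection.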
One-sentence Summary: Propose and analyze an ensemble-based framework for deriving uncertainty estimates in GBDT models.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics