Learning Robust XGBoost Ensembles for Regression Tasks

Published: 07 May 2025 (Last Modified: 17 Jun 2025)
Venue: UAI 2025 Poster
License: CC BY 4.0
Keywords: Adversarial Robustness, XGBoost, Decision Trees, Robust Training, Regression, Ensemble Models, Robust Splitting
TL;DR: The paper presents a novel method for training adversarially robust XGBoost ensembles, applicable to any task with a differentiable loss function.
Abstract:

Methods to improve the adversarial robustness of tree-based ensemble models for classification tasks have received significant attention in recent years. In this work, we propose a novel method for training robust tree-based boosted ensembles, applicable to any task that employs a differentiable loss function, leveraging the XGBoost framework. Our work introduces an analytical solution for the upper bound of the robust loss function that can be computed in constant time, enabling the construction of robust splits without sacrificing computational efficiency. Although our method is general, we focus its application on regression tasks, extending conventional regression metrics to better quantify model robustness. An extensive evaluation on 19 regression datasets from a widely used tabular data benchmark demonstrates that, in the face of adversarial perturbations in the input space, our proposed method results in ensembles that are up to 44% more robust than the current state of the art and 113% more robust than the conventional XGBoost model when considering norm-bounded attacks of radius 0.05.
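The abstract summarizes the approach without code. As a rough illustration only (not the authors' method), the following minimal Python sketch shows the underlying idea of robust split evaluation under an L-infinity perturbation of radius eps: points whose feature value lies within eps of the split threshold can be pushed into either child by an adversary, so the split is scored against a worst-case routing. All names here (robust_split_score, structure_loss) are hypothetical, and the per-point worst case is coarsened to the two extreme routings of the ambiguous set; the paper's contribution is an exact analytical upper bound on the robust loss that is computable in constant time per candidate split.

```python
import numpy as np

def robust_split_score(x, g, h, threshold, eps, lam=1.0):
    """Illustrative sketch of robust split evaluation (not the authors' code).

    x: feature values at the node; g, h: per-example gradients and hessians
    of a differentiable loss (XGBoost-style); eps: perturbation radius.
    Returns an (approximate) worst-case structure loss for this split.
    """
    surely_left = x < threshold - eps       # cannot be pushed across the split
    surely_right = x >= threshold + eps
    ambiguous = ~(surely_left | surely_right)  # adversary chooses the side

    def structure_loss(GL, HL, GR, HR):
        # Negative XGBoost split quality: -G^2 / (H + lambda) per child;
        # larger values mean a worse (higher-loss) split.
        return -GL**2 / (HL + lam) - GR**2 / (HR + lam)

    GL, HL = g[surely_left].sum(), h[surely_left].sum()
    GR, HR = g[surely_right].sum(), h[surely_right].sum()
    Ga, Ha = g[ambiguous].sum(), h[ambiguous].sum()

    # Coarse bound: evaluate only the two extreme routings of the ambiguous
    # points (all left vs. all right) and keep the worse one.
    return max(
        structure_loss(GL + Ga, HL + Ha, GR, HR),
        structure_loss(GL, HL, GR + Ga, HR + Ha),
    )

# Toy usage with squared-error gradients (g = pred - y) and unit hessians.
rng = np.random.default_rng(0)
x = rng.normal(size=256)
y = np.sin(x)
g, h = np.zeros_like(y) - y, np.ones_like(y)
print(robust_split_score(x, g, h, threshold=0.0, eps=0.05))
```

A robust training procedure would pick, at each node, the threshold minimizing such a worst-case score instead of the standard gain; the paper replaces the O(n) worst-case evaluation sketched above with its constant-time analytical bound.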

LaTeX Source Code: zip
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission557/Authors, auai.org/UAI/2025/Conference/Submission557/Reproducibility_Reviewers
Submission Number: 557