Abstract: Stochastic Gradient Boosting (SGB) is a widely used approach to regularizing boosting models based on decision trees. It has been shown that, in many cases, random sampling at each iteration can improve the generalization performance of the model and can also reduce the learning time. Various non-uniform sampling approaches have been proposed, and it is currently unclear which of them is the most effective. In this paper, we formulate the problem of randomization in SGB as an optimization of sampling probabilities, with the objective of maximizing the estimation accuracy of the split scoring used to train decision trees. This optimization problem admits a closed-form, nearly optimal solution, which leads to a new sampling technique that we call Minimal Variance Sampling (MVS). Our method both decreases the number of examples needed at each boosting iteration and significantly improves the quality of the model compared to state-of-the-art sampling methods.
Code Links: https://github.com/ibr11/LightGBM/blob/master/src/boosting/mvs.hpp, https://github.com/ibr11/catboost/blob/master/catboost/libs/algo/mvs.cpp
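The linked repositories contain the C++ implementations inside LightGBM and CatBoost. As a rough illustration only, here is a minimal Python sketch of the sampling step, assuming the form described in the paper: each example is kept with probability min(1, ĝ_i/μ), where ĝ_i = sqrt(g_i² + λh_i²) is the regularized absolute gradient and the threshold μ is chosen so that the expected sample size matches the target; kept examples are re-weighted by 1/p_i so split-score estimates stay unbiased. All names here (mvs_sample, lam, sample_rate) are illustrative, not the API of either library.

```python
import numpy as np

def mvs_sample(grads, hessians, sample_rate, lam=0.1, rng=None):
    """Illustrative sketch of Minimal Variance Sampling (names are hypothetical).

    Assumes at least one non-zero gradient and 0 < sample_rate < 1.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Regularized absolute gradient; lam plays the role of the lambda
    # hyperparameter trading off gradient and hessian contributions.
    g = np.sqrt(grads ** 2 + lam * hessians ** 2)
    n_target = sample_rate * len(g)

    # Binary search for the threshold mu such that the expected sample size
    # sum_i min(1, g_i / mu) equals n_target.
    lo, hi = 0.0, g.sum() / n_target  # hi is large enough to bracket the root
    for _ in range(50):
        mu = 0.5 * (lo + hi)
        if np.minimum(1.0, g / mu).sum() > n_target:
            lo = mu  # too many expected samples -> raise the threshold
        else:
            hi = mu
    probs = np.minimum(1.0, g / mu)

    # Bernoulli sampling; importance weights 1/p_i keep the split-score
    # estimates (sums of gradient statistics over candidate splits) unbiased.
    mask = rng.random(len(g)) < probs
    return np.flatnonzero(mask), 1.0 / probs[mask]
```

In an actual GBDT training loop, the returned indices would select the examples used to build the next tree, and the weights would multiply their gradient statistics when scoring candidate splits.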
CMT Num: 8632