Keywords: ensemble method, subsampling, heavy tail, exponential convergence, excess risk, generalization
TL;DR: Our paper shows that ensemble learning via majority voting over models fitted on subsamples attains exponentially decaying excess-risk tails, improving on base learners with slow (polynomial) rates.
Abstract: Ensemble learning is a popular technique to improve the accuracy of machine learning models. It traditionally hinges on the rationale that aggregating multiple weak models can lead to better models with lower variance and hence higher stability, especially for discontinuous base learners. In this paper, we provide a new perspective on ensembling. By selecting the most frequently generated model when the base learner is repeatedly applied to subsamples, we can attain exponentially decaying tails for the excess risk, even if the base learner suffers from slow (i.e., polynomial) decay rates. This tail-enhancement power of ensembling applies to base learners that have reasonable predictive power to begin with and is stronger than variance reduction in the sense that it yields a rate improvement. We demonstrate how our ensemble methods can substantially improve out-of-sample performance in a range of numerical examples involving heavy-tailed data or intrinsically slow rates.
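A minimal sketch of the ensembling-by-mode idea described in the abstract: run the base learner on many random subsamples and return the model that is generated most often. The function names `fit_base_learner` and `model_key`, and all parameter choices, are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch, not the paper's code: majority vote over models
# fitted on random subsamples, returning the most frequently generated model.
from collections import Counter
import numpy as np

def mode_ensemble(X, y, fit_base_learner, model_key,
                  n_subsamples=50, subsample_frac=0.5, rng=None):
    """Fit the base learner on random subsamples and return the model
    produced most often (ties broken arbitrarily)."""
    rng = np.random.default_rng(rng)
    n = len(X)
    k = max(1, int(subsample_frac * n))
    models, keys = [], []
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=k, replace=False)   # draw a subsample without replacement
        model = fit_base_learner(X[idx], y[idx])     # train the base learner on the subsample
        models.append(model)
        keys.append(model_key(model))                # hashable identifier of the fitted model
    most_common_key, _ = Counter(keys).most_common(1)[0]
    return models[keys.index(most_common_key)]
```

For "most frequently generated model" to be well defined, `model_key` must map fitted models with the same decision rule to the same hashable key, for example by recording the model's predictions on a fixed reference set; this is an assumption of the sketch, not a detail taken from the paper.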
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 18067