Optimal Weak to Strong Learning

Published: 31 Oct 2022, Last Modified: 05 Jan 2023 · NeurIPS 2022 Accept
Keywords: boosting, weak learning, sample complexity, lower bound
TL;DR: We give an algorithm that turns a weak learner into a strong learner while using the minimum possible number of training samples.
Abstract: The classic AdaBoost algorithm converts a weak learner, that is, an algorithm producing a hypothesis slightly better than random guessing, into a strong learner that achieves arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner but uses less training data than AdaBoost and all other weak-to-strong learners to achieve the same generalization bounds. A matching sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner.
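For context, the sketch below shows the classic AdaBoost loop the abstract refers to: a weak learner (here, illustratively, a depth-1 decision stump from scikit-learn) is trained repeatedly on reweighted data, and the weighted majority vote of the resulting hypotheses forms the strong learner. This is a minimal illustration of the weak-to-strong conversion only, not the paper's new sample-optimal algorithm; the function name `adaboost`, the stump weak learner, and the choice of T = 50 rounds are all hypothetical.

```python
# Minimal AdaBoost sketch (the classic algorithm, NOT the paper's new method).
# Assumptions: labels y are in {-1, +1}; the weak learner is a depth-1
# decision stump; T = 50 boosting rounds. All names here are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=50):
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                  # uniform initial sample weights
    hypotheses, alphas = [], []
    for _ in range(T):
        # Weak learner: a stump trained under the current weighting.
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        eps = w[pred != y].sum()             # weighted training error
        if eps >= 0.5:                       # weak-learning guarantee violated
            break
        alpha = 0.5 * np.log((1.0 - eps) / (eps + 1e-12))
        w *= np.exp(-alpha * y * pred)       # upweight misclassified points
        w /= w.sum()
        hypotheses.append(h)
        alphas.append(alpha)

    def strong(X_new):
        # Strong learner: sign of the weighted vote over all weak hypotheses.
        votes = sum(a * h.predict(np.asarray(X_new))
                    for a, h in zip(alphas, hypotheses))
        return np.sign(votes)

    return strong
```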