On Margins and Generalisation for Voting Classifiers

Published: 31 Oct 2022, 18:00, Last Modified: 11 Jan 2023, 13:33
NeurIPS 2022 Accept
Keywords: PAC-Bayes, Generalisation bounds, Ensemble learning, Margins, Majority votes, Aggregation of experts
TL;DR: A new margin bound for majority voting of weighted ensembles provides consistently tight empirical generalisation guarantees on real tasks.
Abstract: We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory. These provide state-of-the-art guarantees on a number of classification tasks. Our central results leverage the Dirichlet posteriors studied recently by Zantedeschi et al. (2021) for training voting classifiers; in contrast to that work, our bounds apply to non-randomised votes via the use of margins. Our contributions add perspective to the debate on the "margins theory" proposed by Schapire et al. (1998) for the generalisation of ensemble classifiers.
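To make the central quantity concrete: for a weighted majority vote over binary classifiers, the margin of the vote on an example is the (signed) weighted vote mass agreeing with the true label. The sketch below is illustrative only (function and variable names are not from the paper); it assumes base classifiers outputting labels in {-1, +1} and a weight vector forming a distribution over the ensemble.

```python
import numpy as np

def vote_margin(votes, weights, y):
    """Margin of a weighted majority vote on one example.

    votes:   array of shape (n_classifiers,), entries in {-1, +1},
             the base classifiers' predictions h_i(x).
    weights: non-negative weights summing to 1 (a distribution over voters).
    y:       true label in {-1, +1}.

    Returns y * sum_i weights[i] * votes[i]; the vote is correct iff
    the margin is positive, and larger margins mean a more confident vote.
    """
    votes = np.asarray(votes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return y * float(np.dot(weights, votes))

# Example (hypothetical numbers): three voters with weights (0.5, 0.3, 0.2)
# voting (+1, +1, -1) on an example with true label y = +1 give margin
# 0.5 + 0.3 - 0.2 = 0.6 > 0, so the weighted majority vote is correct.
```

Margin-based bounds of the kind studied in the paper control the test error of the deterministic vote through the distribution of this margin on the training sample.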
Supplementary Material: pdf