Unsupervised estimation of ensemble accuracy

Published: 27 Oct 2023, Last Modified: 14 Dec 2023 · InfoCog@NeurIPS2023 Poster
Keywords: Ensemble learning, Unsupervised, Accuracy bounds
TL;DR: A practical accuracy bound for classifiers using unlabeled data and no optimization
Abstract: Ensemble learning combines several individual models to obtain better generalization performance. In this work we present a practical method for estimating the joint power of several classifiers. Unlike existing approaches, which focus on "diversity" measures, it does not rely on labels. This makes it both accurate and practical in the modern setting of unsupervised learning with huge datasets. The heart of the method is a combinatorial bound on the number of mistakes the ensemble is likely to make. The bound can be efficiently approximated in time linear in the number of samples. We relate the bound to actual misclassifications, which makes it useful as a predictor of performance. We demonstrate the method on popular large-scale face recognition datasets, which provide a useful playground for fine-grained classification tasks with noisy data over many classes.
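The abstract does not spell out the bound itself, but the key ingredient it describes — a label-free statistic over classifier outputs, computable in time linear in the number of samples — can be illustrated with a simple proxy. The sketch below computes the mean pairwise disagreement rate among classifiers on unlabeled data; the function name `pairwise_disagreement` and the use of disagreement as the agreement statistic are illustrative assumptions, not the paper's actual combinatorial bound.

```python
import numpy as np

def pairwise_disagreement(preds):
    """Mean pairwise disagreement rate among classifiers.

    preds: (k, n) integer array of predicted labels from k classifiers
    on n unlabeled samples. No ground-truth labels are needed, and the
    cost is O(k^2 * n), i.e. linear in the number of samples for a
    fixed ensemble size. (Illustrative proxy, not the paper's bound.)
    """
    k, n = preds.shape
    total = 0.0
    pairs = 0
    for i in range(k):
        for j in range(i + 1, k):
            # fraction of samples on which classifiers i and j differ
            total += np.mean(preds[i] != preds[j])
            pairs += 1
    return total / pairs

# Toy example: three classifiers labeling five unlabeled samples.
preds = np.array([
    [0, 1, 1, 0, 2],
    [0, 1, 0, 0, 2],
    [0, 1, 1, 1, 2],
])
print(pairwise_disagreement(preds))  # averages 0.2, 0.2, 0.4 -> ~0.2667
```

A low disagreement rate alone does not imply high accuracy (all classifiers could share the same mistakes); the paper's contribution is precisely to turn such unlabeled agreement statistics into a bound that is related to actual misclassifications.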
Submission Number: 31