Weak to Strong Learning from Aggregate Labels

Published: 10 Oct 2024, Last Modified: 07 Dec 2024 · NeurIPS 2024 Workshop · CC BY 4.0
Keywords: LLP, MIL, Boosting, Impossibility, Algorithms
TL;DR: Impossibility of boosting in LLP and MIL; in LLP, a weak classifier on large bags yields a strong classifier on small bags; with experiments.
Abstract: In learning from aggregate labels, the training data consists of sets or "bags" of feature-vectors (instances) along with an aggregate label for each bag derived from the (usually $\{0,1\}$-valued) labels of its constituent instances. In the *learning from label proportions* (LLP) setting, the aggregate label of a bag is the average of the instance labels, whereas in *multiple instance learning* (MIL) it is the OR. The goal is to train an instance-level predictor, which is typically achieved by fitting a model to the training data, in particular one that maximizes the accuracy, i.e., the fraction of *satisfied* bags: those on which the model's induced labels are consistent with the target aggregate label. A weak learner in this context is one that has a constant accuracy $< 1$ on the training bags, while a strong learner's accuracy can be arbitrarily close to $1$. We study the problem of using a weak learner on such training bags with aggregate labels to obtain a strong learner, analogous to supervised learning, for which boosting algorithms are known. Our first result shows the impossibility of boosting in the LLP setting using weak classifiers of any accuracy $< 1$, by constructing a collection of bags for which such weak learners exist (for any weight assignment) yet which admits no strong learner. A variant of this construction also rules out boosting in MIL for a non-trivial range of weak learner accuracy. In the LLP setting, however, we show that a weak learner (with small accuracy) on large enough bags can in fact be used to obtain a strong learner for small bags, in polynomial time. We also provide a more efficient, sampling-based variant of our procedure with probabilistic guarantees, which are empirically validated on three real and two synthetic datasets. Our work is the first to theoretically study weak-to-strong learning from aggregate labels, with an algorithm to achieve the same for LLP, while proving the impossibility of boosting for both LLP and MIL.
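The abstract defines bag-level accuracy as the fraction of satisfied bags, where satisfaction in LLP means the model's induced labels average to the bag's label proportion, and in MIL means their OR matches the bag's OR label. The following is a minimal sketch of that accuracy measure, not the paper's algorithm; the function and variable names (e.g., `bag_satisfied_llp`, `bag_accuracy`) and the toy bags are hypothetical.

```python
import numpy as np

def bag_satisfied_llp(induced_labels, target_proportion):
    # LLP: the bag is satisfied if the average of the model's induced
    # {0,1} labels equals the bag's target label proportion.
    return bool(np.isclose(np.mean(induced_labels), target_proportion))

def bag_satisfied_mil(induced_labels, target_or):
    # MIL: the bag is satisfied if the OR of the model's induced
    # {0,1} labels equals the bag's target OR label.
    return int(np.max(induced_labels)) == int(target_or)

def bag_accuracy(bags, targets, satisfied):
    # Accuracy = fraction of satisfied bags.
    return float(np.mean([satisfied(b, t) for b, t in zip(bags, targets)]))

# Hypothetical example: three bags of induced instance labels.
bags = [np.array([1, 0, 1, 0]), np.array([0, 0, 0, 0]), np.array([1, 1, 0, 1])]
llp_targets = [0.5, 0.0, 0.5]   # label proportions per bag
mil_targets = [1, 0, 1]         # OR labels per bag

print(bag_accuracy(bags, llp_targets, bag_satisfied_llp))  # 2/3: last bag's mean is 0.75, not 0.5
print(bag_accuracy(bags, mil_targets, bag_satisfied_mil))  # 1.0: every bag's OR matches
```

Under this measure, a weak learner attains some constant bag accuracy below 1, and the paper's question is whether such a learner can be converted into one whose bag accuracy is arbitrarily close to 1.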
Submission Number: 114
