Group-wise oracle-efficient algorithms for online multi-group learning

Published: 25 Sept 2024, Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: multi-group learning, online learning, oracle-efficient
TL;DR: We develop algorithms for achieving sublinear regret in online multi-group learning when the collection of groups is exponentially large or infinite.
Abstract: We study the problem of online multi-group learning, a learning model in which an online learner must simultaneously achieve small prediction regret on a large collection of (possibly overlapping) subsequences corresponding to a family of groups. Groups are subsets of the context space, and in fairness applications, they may correspond to subpopulations defined by expressive functions of demographic attributes. Because the family of groups may be exponentially large or even infinite, explicit enumeration is infeasible, and we instead seek oracle-efficient algorithms: algorithms that are computationally efficient given access to an optimization oracle for the hypothesis class. In this paper, we design such oracle-efficient algorithms with sublinear regret under a variety of settings, including: (i) the i.i.d. setting, (ii) the adversarial setting with smoothed context distributions, and (iii) the adversarial transductive setting.
Primary Area: Learning theory
Submission Number: 19047
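
To make the group-wise regret objective from the abstract concrete, here is a minimal Python sketch. It is not the paper's algorithm or API: the names `groups`, `hypotheses`, and `loss` are illustrative assumptions, the comparator class is assumed finite, and the best-in-hindsight hypothesis is found by brute-force enumeration, which is exactly the step an optimization oracle would replace in the oracle-efficient algorithms the paper studies.

```python
# Minimal sketch (not the authors' method): computing group-wise regret
# for an online learner. `groups` is a list of indicator functions over
# contexts, `hypotheses` a finite comparator class of functions x -> prediction,
# and `loss` a bounded loss function (prediction, outcome) -> float.

def group_regret(history, groups, hypotheses, loss):
    """history: list of (context, prediction, outcome) triples from T rounds."""
    regrets = {}
    for g_idx, in_group in enumerate(groups):
        # Restrict to the subsequence of rounds whose context lies in the group.
        sub = [(x, p, y) for (x, p, y) in history if in_group(x)]
        learner_loss = sum(loss(p, y) for (_, p, y) in sub)
        # Best fixed hypothesis in hindsight on this subsequence; an
        # optimization oracle would return this without explicit enumeration.
        best_loss = min(
            (sum(loss(h(x), y) for (x, _, y) in sub) for h in hypotheses),
            default=0.0,
        )
        regrets[g_idx] = learner_loss - best_loss
    return regrets
```

The multi-group goal is that every entry of the returned dictionary grows sublinearly in the number of rounds, simultaneously across all groups, even though the subsequences overlap.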