Initializing Services in Interactive ML Systems for Diverse Users

Published: 25 Sept 2024 · Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: Algorithm design for multi-service ML systems, initialization, clustering, approximation ratio, preference learning
TL;DR: We study the problem of initializing multiple services for a provider catering to a user base with diverse preferences, developing algorithms with approximation-ratio guarantees on average and worst-case losses.
Abstract: This paper investigates ML systems serving a group of users, with multiple models/services, each aimed at specializing to a sub-group of users. We consider settings where, upon deploying a set of services, users choose the one minimizing their personal losses and the learner iteratively learns by interacting with diverse users. Prior research shows that the outcomes of learning dynamics, which comprise both the services' adjustments and users' service selections, hinge significantly on the initial conditions. However, finding good initial conditions faces two main challenges: (i) \emph{Bandit feedback:} typically, data on user preferences are not available before deploying services and observing user behavior; (ii) \emph{Suboptimal local solutions:} the total loss landscape (i.e., the sum of loss functions across all users and services) is not convex, and gradient-based algorithms can get stuck in poor local minima. We address these challenges with a randomized algorithm that adaptively selects a minimal set of users for data collection in order to initialize a set of services. Under mild assumptions on the loss functions, we prove that our initialization leads to a total loss within a factor of the \textit{globally optimal total loss with complete user preference data}, and this factor scales logarithmically in the number of services. This result generalizes the well-known $k$-means++ guarantee to a broad problem class and is also of independent interest. The theory is complemented by experiments on real as well as semi-synthetic datasets.
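
To make the connection to $k$-means++ concrete, below is a minimal sketch of a seeding routine that samples initial services in proportion to each user's current best loss, generalizing $D^2$-sampling from squared Euclidean distances to an arbitrary per-user loss. This is an illustrative assumption-laden sketch, not the authors' exact algorithm: the function `seed_services` and the `loss` callback are hypothetical placeholders, and the adaptive bandit-feedback data-collection step from the paper is abstracted away.

```python
import numpy as np

def seed_services(users, k, loss, rng=None):
    """k-means++-style seeding generalized to arbitrary per-user losses.

    users : array of user feature vectors (one row per user)
    k     : number of services to initialize
    loss  : loss(user, service) -> nonnegative float, the cost a user
            incurs when served by a given service parameter vector
    Returns a list of k initial service parameter vectors.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(users)

    # First service: a user sampled uniformly at random.
    services = [users[rng.integers(n)].copy()]

    # Each user's current cost is its loss to the best deployed service so far.
    cost = np.array([loss(u, services[0]) for u in users])

    for _ in range(1, k):
        # Sample the next seed user with probability proportional to its
        # current cost (the D^2-sampling idea behind k-means++).
        probs = cost / cost.sum()
        idx = rng.choice(n, p=probs)
        services.append(users[idx].copy())

        # Update each user's cost with the newly added service.
        new_cost = np.array([loss(u, services[-1]) for u in users])
        cost = np.minimum(cost, new_cost)

    return services


# Example usage with squared-error losses, which recovers plain k-means++ seeding.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    users = rng.normal(size=(200, 2))
    sq_loss = lambda u, s: float(np.sum((u - s) ** 2))
    init = seed_services(users, k=5, loss=sq_loss, rng=rng)
    print(np.array(init))
```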
Supplementary Material: zip
Primary Area: Active learning
Submission Number: 15091