On the Necessity of Collaboration for Online Model Selection with Decentralized Data

Published: 25 Sept 2024 · Last Modified: 26 Dec 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: online learning, model selection, federated learning, kernel methods
TL;DR: We show that collaboration in previous federated online model selection algorithms is unnecessary, and give conditions under which collaboration is necessary.
Abstract: We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients. Previous work proposed various federated algorithms without demonstrating that collaboration is necessary, whereas we answer this question from the novel perspective of computational constraints. We prove lower bounds on the regret, propose a federated algorithm, and analyze its upper bound. Our results show that (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces. We show that collaboration in previous federated algorithms for distributed online multi-kernel learning is unnecessary, and improve their regret bounds at a smaller computational and communication cost. Our algorithm relies on three new techniques: an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and a decoupling of model selection and prediction, which may be of independent interest.
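To make the federated online mirror descent framework concrete, below is a minimal sketch, assuming entropic mirror descent (exponentiated gradient) over the $K$ candidate hypothesis spaces with periodic server averaging. The function names, the simple-averaging aggregation, and the synthetic per-model losses are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def client_omd_step(weights, losses, eta):
    """One entropic mirror descent step on the probability simplex:
    w_k <- w_k * exp(-eta * loss_k), then renormalize."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

def server_average(client_weights):
    """Server aggregation by simple averaging (illustrative choice)."""
    return np.mean(client_weights, axis=0)

# Toy run: M clients, K candidate hypothesis spaces, T rounds.
rng = np.random.default_rng(0)
M, K, T, eta = 4, 8, 100, 0.5
weights = np.full((M, K), 1.0 / K)            # uniform initial distribution
for t in range(T):
    losses = rng.uniform(size=(M, K))         # stand-in per-model losses
    weights = np.array([client_omd_step(weights[m], losses[m], eta)
                        for m in range(M)])
    if (t + 1) % 10 == 0:                     # periodic communication round
        avg = server_average(weights)
        weights = np.tile(avg, (M, 1))
```

In this toy version each client tracks a distribution over the $K$ candidate models and only occasionally synchronizes with the server, which is where the computational-constraint trade-off in the abstract would enter: per-round work below $O(K)$ on a single client is what makes the communication rounds matter.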
Supplementary Material: zip
Primary Area: Online learning
Submission Number: 8672