Budgeted Online Model Selection and Fine-Tuning via Federated Learning

Published: 27 Feb 2024, Last Modified: 27 Feb 2024. Accepted by TMLR.
Abstract: Online model selection involves selecting a model from a set of candidate models "on the fly" to perform prediction on a stream of data. The choice of candidate models therefore has a crucial impact on performance. Although employing a larger set of candidate models naturally leads to more flexibility in model selection, this may be infeasible when prediction tasks are performed on edge devices with limited memory. Faced with this challenge, the present paper proposes an online federated model selection framework in which a group of learners (clients) interacts with a server that has sufficient memory to store all candidate models. Each client, however, stores only a subset of models that fits into its memory, and performs its own prediction task using one of the stored models. Furthermore, employing the proposed algorithm, clients and the server collaborate to fine-tune the models, adapting them to a non-stationary environment. Theoretical analysis proves that the proposed algorithm enjoys sub-linear regret with respect to the best model in hindsight. Experiments on real datasets demonstrate the effectiveness of the proposed algorithm.
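To illustrate the budgeted setting the abstract describes, the sketch below shows a client that stores only a memory-budgeted subset of the server's candidate models and selects one per round with an exponential-weights (Hedge-style) rule. This is a minimal illustration of budgeted online model selection in general, not the paper's OFMS-FT algorithm; the class name, the update rule, and all numbers are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class BudgetedClient:
    """Toy client: keeps a subset of the server's models that fits its
    memory budget, and picks one per round via exponential weights.
    Illustrative only; not the paper's OFMS-FT procedure."""

    def __init__(self, model_ids, eta=0.5):
        self.model_ids = list(model_ids)          # subset stored locally
        self.weights = np.ones(len(self.model_ids))
        self.eta = eta                            # learning rate (assumed)

    def choose(self):
        # Sample a stored model with probability proportional to its weight.
        p = self.weights / self.weights.sum()
        return rng.choice(len(self.model_ids), p=p)

    def update(self, losses):
        # losses: one loss in [0, 1] per stored model on the current sample.
        self.weights *= np.exp(-self.eta * np.asarray(losses))

# Suppose the server holds 10 candidate models but this client can store 3.
client = BudgetedClient(model_ids=[0, 4, 7])
for t in range(100):
    k = client.choose()                           # model used for prediction
    # Illustrative per-model losses on this round's sample:
    # the model at index 1 (id 4) performs best on this stream.
    client.update([0.9, 0.1, 0.8])

# Weight mass concentrates on the best stored model over time.
best = int(np.argmax(client.weights))
print(client.model_ids[best])
```

In the paper's full framework the server additionally aggregates client updates to fine-tune the candidate models themselves; the sketch above covers only the per-client selection step under a memory budget.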
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/pouyamghari/OFMS-FT
Assigned Action Editor: ~Novi_Quadrianto1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1652