Keywords: Federated learning, online learning
TL;DR: We demonstrate that federated learning is necessary for online linear regression with decentralized data when resources are limited, breaking away from the pessimistic result that federated learning is unnecessary in the full-information setting.
Abstract: In this paper, we study the necessity of federated learning (FL) for online linear regression with decentralized data. Previous work proved that FL is unnecessary for minimizing regret in the full-information setting, while we prove that it can be necessary if only a limited number of attributes of each instance are observed. We call this problem online sparse linear regression with decentralized data (OSLR-DecD). We propose a federated algorithm for OSLR-DecD and prove a lower bound on the regret of any noncooperative algorithm. In the case of $d=o(M)$, where $M$ is the number of clients and $d$ is the dimension of the data, the upper bound on the regret of our algorithm is smaller than the lower bound, demonstrating the necessity of FL. When $M=1$, we give the first lower bound on the regret and improve previous upper bounds. We develop three new techniques: an anytime federated online mirror descent with negative entropy regularization, a paradigm for client-server collaboration with privacy protection, and a reduction from online sparse linear regression to prediction with limited advice for establishing the lower bound on the regret, some of which may be of independent interest.
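The abstract mentions online mirror descent (OMD) with negative entropy regularization as one building block. As a point of reference only (the paper's actual federated, anytime variant is not reproduced here), a minimal single-client sketch of the standard OMD update with a negative-entropy regularizer on the probability simplex, which reduces to the exponentiated-gradient rule $w_{t+1,i} \propto w_{t,i} e^{-\eta g_{t,i}}$; the step size `eta` and the toy gradients are illustrative assumptions:

```python
import numpy as np

def omd_negative_entropy_step(w, grad, eta):
    """One OMD step with the negative-entropy regularizer on the simplex.

    Equivalent to the exponentiated-gradient update: multiply each weight
    by exp(-eta * gradient), then renormalize so weights sum to one.
    """
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Toy run: d = 3 coordinates, a few rounds with placeholder gradients.
rng = np.random.default_rng(0)
w = np.full(3, 1.0 / 3.0)  # uniform start (minimizer of negative entropy)
for _ in range(5):
    g = rng.normal(size=3)  # stand-in for a loss gradient
    w = omd_negative_entropy_step(w, g, eta=0.1)
```

The update keeps the iterate strictly inside the simplex at every round, which is why the negative-entropy regularizer is a natural fit for maintaining sampling distributions over attributes.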
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6267