## Generalization Bounds for Federated Learning: Fast Rates, Unparticipating Clients and Unbounded Losses

**Keywords:** Federated learning, Generalization error, Risk bound, Unbounded losses, Learning theory

**Abstract:** In federated learning, the underlying data distributions may differ across clients. This paper provides a theoretical analysis of the generalization error of federated learning that captures both the heterogeneity and the relatedness of the distributions. In particular, we assume that the heterogeneous distributions are sampled from a meta-distribution. In this two-level distribution framework, we characterize the generalization error not only for clients participating in the training but also for unparticipating clients. We first show that the generalization error for unparticipating clients can be bounded by the participating generalization error plus a participation gap caused by client sampling. We further establish fast learning bounds of order $\mathcal{O}(\frac{1}{mn} + \frac{1}{m})$ for unparticipating clients, where $m$ is the number of clients and $n$ is the sample size at each client. To our knowledge, the obtained fast bounds are state-of-the-art in the two-level distribution framework. Moreover, previous theoretical results mostly require the loss function to be bounded. We derive convergence bounds of order $\mathcal{O}(\frac{1}{\sqrt{mn}} + \frac{1}{\sqrt{m}})$ under unbounded-loss assumptions, including sub-exponential and sub-Weibull losses.
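The two-level distribution framework in the abstract can be illustrated with a minimal simulation (all names and the mean-estimation task are illustrative assumptions, not the paper's method): client distributions are drawn from a meta-distribution, a global model is fit on the $m$ participating clients' $n$ samples each, and its risk is then compared between participating clients and fresh, unparticipating clients drawn from the same meta-distribution.

```python
import random
import statistics

random.seed(0)

# Two-level sampling (illustrative): each client's data distribution is
# N(mu_i, 1), where the mean mu_i is itself drawn from a meta-distribution
# N(0, 1). This is a toy instance of the paper's framework, not its setup.
m, n = 50, 20  # m participating clients, n samples per client
client_means = [random.gauss(0, 1) for _ in range(m)]
client_data = [[random.gauss(mu, 1) for _ in range(n)] for mu in client_means]

# Global model: average of the client sample means (a FedAvg-style estimate
# of the meta-mean for this mean-estimation toy problem).
theta = statistics.mean(statistics.mean(d) for d in client_data)

def risk(theta, mu):
    # Squared-loss risk of predicting theta on the distribution N(mu, 1):
    # E[(theta - X)^2] = (theta - mu)^2 + Var(X) = (theta - mu)^2 + 1.
    return (theta - mu) ** 2 + 1.0

# Participating risk: averaged over the m clients seen during training.
participating = statistics.mean(risk(theta, mu) for mu in client_means)

# Unparticipating risk: averaged over fresh clients sampled from the
# meta-distribution, approximated here by Monte Carlo.
fresh_means = [random.gauss(0, 1) for _ in range(10_000)]
unparticipating = statistics.mean(risk(theta, mu) for mu in fresh_means)

print(f"participating risk:   {participating:.3f}")
print(f"unparticipating risk: {unparticipating:.3f}")
print(f"participation gap:    {unparticipating - participating:.3f}")
```

The difference between the two printed risks is the "participation gap" the abstract refers to; as $m$ grows, both the gap and the estimator's error shrink, mirroring the $\frac{1}{m}$ and $\frac{1}{mn}$ terms in the stated rates.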

**Anonymous Url:** I certify that there is no URL (e.g., github page) that could be used to find authors' identity.

**No Acknowledgement Section:** I certify that there is no acknowledgement section in this submission for double blind review.

**Code Of Ethics:** I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.

**Submission Guidelines:** Yes

**Please Choose The Closest Area That Your Submission Falls Into:** Theory (eg, control theory, learning theory, algorithmic game theory)
