What Do We Mean by Generalization in Federated Learning?

Published: 28 Jan 2022, Last Modified: 04 May 2025
Venue: ICLR 2022 Poster
Keywords: Federated Learning, generalization, heterogeneity
Abstract: Federated learning data is drawn from a distribution of distributions: clients are drawn from a meta-distribution, and their data are drawn from local data distributions. Generalization studies in federated learning should therefore distinguish the performance gap due to unseen data from participating clients (out-of-sample gap) from the gap due to unseen client distributions (participation gap). In this work, we propose a framework for disentangling these performance gaps. Using this framework, we observe and explain differences in behavior across natural and synthetic federated datasets, indicating that the dataset synthesis strategy can be important for realistic simulations of generalization in federated learning. We propose a semantic synthesis strategy that enables realistic simulation without naturally partitioned data. Informed by our findings, we offer suggestions to the community for future federated learning works.
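The two gaps described in the abstract can be illustrated with a small sketch. This is not the authors' code; `evaluate`, the data-set arguments, and all names are hypothetical. The idea is to measure a model's average loss on three pools: training data from participating clients, held-out data from those same clients, and data from clients never seen during training. The difference between the first two is the out-of-sample gap; the difference between the last two is the participation gap.

```python
import numpy as np

def generalization_gaps(model, evaluate, train_sets, heldout_sets, unseen_client_sets):
    """Hypothetical sketch: disentangle the out-of-sample gap from the
    participation gap.

    train_sets / heldout_sets: data from clients that participated in training.
    unseen_client_sets: data from clients never seen during training.
    evaluate(model, dataset) is an assumed helper returning an average loss.
    """
    train_loss = np.mean([evaluate(model, d) for d in train_sets])
    heldout_loss = np.mean([evaluate(model, d) for d in heldout_sets])
    unseen_loss = np.mean([evaluate(model, d) for d in unseen_client_sets])

    # Unseen data from seen clients vs. their training data.
    out_of_sample_gap = heldout_loss - train_loss
    # Unseen clients vs. unseen data from seen clients.
    participation_gap = unseen_loss - heldout_loss
    return out_of_sample_gap, participation_gap
```

In a real federated evaluation the per-client losses would typically be weighted by client data sizes; the unweighted mean here is only for brevity.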
One-sentence Summary: We propose a framework for better measuring generalization and heterogeneity in federated learning, apply it for extensive empirical evaluation across six tasks, and make a series of recommendations for future FL works.
Supplementary Material: zip

