Optimal Mechanism Design for Heterogeneous Client Sampling in Federated Learning

Published: 01 Jan 2024, Last Modified: 17 Jul 2025 · IEEE Trans. Mob. Comput. 2024 · CC BY-SA 4.0
Abstract: Federated learning (FL) provides a collaborative paradigm for training a global model in a distributed manner while protecting clients’ privacy. In addition to communication bottlenecks and non-i.i.d. data distributions, the FL framework introduces two fundamental economic challenges: first, clients are self-interested and strategic in practice, requiring specific incentives to participate in FL; second, each client can misreport its private information to its advantage. Although existing studies have proposed economic mechanisms, they are often restricted to a “binary” participation scenario, which leads to communication overhead or biased models under client heterogeneity. In this paper, we first analyze the convergence bound under arbitrary client sampling probabilities with a varying number of clients. Then, we consider an optimal mechanism design problem: minimizing the FL convergence bound subject to a budget constraint, incentive compatibility, and individual rationality. We derive the optimal sampling probability function in closed form. To address the challenge of unknown prior distributions, we introduce a prior-independent mechanism design and show how it gradually learns cost distributions by exploiting the incentive compatibility property. We perform extensive experiments and show that, while outperforming the uniform sampling scheme, both proposed schemes (prior-based and prior-independent) perform close to the ideal complete-information upper bound.
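To illustrate the role of arbitrary (non-uniform) client sampling probabilities, here is a minimal, hypothetical sketch of independent client sampling with inverse-probability reweighting, which keeps the aggregated update unbiased regardless of the chosen probabilities. The function name and scalar "update" representation are illustrative assumptions, not the paper's actual scheme.

```python
import random

def sample_and_aggregate(updates, probs, rng):
    """Sample client i independently with probability probs[i]; reweight
    each sampled update by 1/probs[i] so that the expectation of the
    aggregate equals the full sum of all client updates (unbiasedness).

    NOTE: illustrative sketch only -- updates are scalars here; in FL
    they would be model gradients or weight deltas.
    """
    agg = 0.0
    for u, p in zip(updates, probs):
        if rng.random() < p:
            agg += u / p  # inverse-probability weighting
    return agg
```

The key design point is that the sampling probabilities can then be optimized (e.g., against a convergence bound and a budget constraint, as in the paper) without sacrificing unbiasedness, since the reweighting corrects for any choice of probabilities.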