Incentives in Federated Learning with Heterogeneous Agents

ICLR 2026 Conference Submission 14570 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: federated learning, incentives, mechanism design, PAC learning, sample complexity, approximation algorithms, strategyproofness, price of stability
TL;DR: We model incentives in heterogeneous-data FL, show equilibria can be arbitrarily costly, prove optimal allocation is NP-hard, give a logarithmic LP approximation, and design a strategy-proof pay-what-you-contribute mechanism.
Abstract: Federated learning promises significant sample-efficiency gains by pooling data across multiple agents, yet incentive misalignment is an obstacle: each update is costly to the contributor but boosts every participant. We introduce a game-theoretic framework that captures heterogeneous data: an agent’s utility depends on who supplies each sample, not just how many. Agents aim to meet a PAC-style accuracy threshold at minimal personal cost. We show that uncoordinated play yields pathologies: pure equilibria may not exist, and the best equilibrium can be arbitrarily more costly than cooperation. To steer collaboration, we analyze the cost-minimizing contribution vector, prove that computing it is NP-hard, and derive a polynomial-time linear program that achieves a logarithmic approximation. Finally, pairing the LP with a simple pay-what-you-contribute rule—each agent receives a payment equal to its sample cost—yields a mechanism that is strategy-proof and, within the class of contribution-based transfers, is unique.
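The submission itself is not reproduced here, so the following is only a schematic reading of the LP step the abstract describes: if each agent's PAC-style accuracy threshold is modeled as a linear covering constraint over the contribution vector, the cost-minimizing relaxation becomes a covering LP, and rounding its fractional solution yields an integral contribution vector. The function name, the coverage matrix, and the naive round-up below are illustrative assumptions, not the paper's actual construction (which the abstract says achieves a logarithmic approximation via a more careful rounding).

```python
import numpy as np
from scipy.optimize import linprog

def lp_contribution_sketch(costs, coverage, thresholds):
    """Solve the fractional covering LP  min c.x  s.t.  coverage @ x >= thresholds,
    x >= 0, then round each coordinate up. A crude stand-in for the paper's
    LP-based scheme; the real rounding is what earns the log-factor guarantee."""
    # coverage[i, j]: marginal value of agent j's samples toward agent i's threshold
    A = np.asarray(coverage, dtype=float)
    b = np.asarray(thresholds, dtype=float)
    c = np.asarray(costs, dtype=float)
    # linprog encodes constraints as A_ub @ x <= b_ub, so negate to get A x >= b.
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * len(c))
    if not res.success:
        raise ValueError("infeasible covering instance")
    return np.ceil(res.x)  # integral contribution vector

# Toy instance: two agents whose heterogeneous samples partially substitute
# for each other, so the LP optimum splits the burden between them.
x = lp_contribution_sketch(costs=[1.0, 1.0],
                           coverage=[[2.0, 1.0], [1.0, 2.0]],
                           thresholds=[4.0, 4.0])
```

Round-up can overshoot the optimum by a factor growing with the number of constraints, which is consistent with (but weaker than) the logarithmic guarantee claimed in the abstract.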
Primary Area: learning theory
Submission Number: 14570