TL;DR: We design a mechanism for federated learning which is fair and encourages high accuracy, welfare, and data collection.
Abstract: Federated learning (FL) is a popular collaborative learning paradigm, whereby agents with individual datasets can jointly train an ML model.
While sharing more data improves model accuracy and leads to higher payoffs, it also raises costs associated with data acquisition or loss of privacy, causing agents to be strategic about their data contributions.
This leads to undesirable behavior at a Nash equilibrium (NE) such as *free-riding*, resulting in sub-optimal fairness, data sharing, and welfare.
To address this, we design $\mathcal{M}^{Shap}$, a budget-balanced payment mechanism for FL that admits Nash equilibria under mild conditions and achieves *reciprocal fairness*, where each agent's payoff equals her contribution to the collaboration, as measured by her Shapley share.
In addition to fairness, we show that the NE under $\mathcal{M}^{Shap}$ has desirable guarantees in terms of accuracy, welfare, and total data collected.
We validate our theoretical results through experiments, demonstrating that $\mathcal{M}^{Shap}$ outperforms baselines in terms of fairness and efficiency.
Lay Summary: (1) Federated learning (FL) is a popular collaborative learning paradigm, but the cost of data sharing causes agents to be strategic about their data contribution. (2) We design MShap -- a budget-balanced payment mechanism for FL that admits Nash equilibria under mild conditions. (3) Our mechanism achieves reciprocal fairness and also has desirable guarantees in terms of accuracy, welfare, and total data collected. The theoretical results are validated on real-world datasets.
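To make the notion of a Shapley share concrete, the following is a minimal, self-contained sketch of exact Shapley-value computation over all agent orderings. The coalition value function (a concave benefit of pooled data) and the agent data amounts are purely illustrative assumptions, not the paper's actual utility model or the $\mathcal{M}^{Shap}$ mechanism itself.

```python
import math
from itertools import permutations

def shapley_values(agents, value):
    """Exact Shapley values: average each agent's marginal contribution
    over every ordering of the agents."""
    shares = {a: 0.0 for a in agents}
    perms = list(permutations(agents))
    for order in perms:
        coalition = set()
        for a in order:
            before = value(frozenset(coalition))
            coalition.add(a)
            shares[a] += value(frozenset(coalition)) - before
    return {a: s / len(perms) for a, s in shares.items()}

# Hypothetical example: coalition value is a concave function of pooled data.
data = {"A": 4.0, "B": 1.0, "C": 1.0}
v = lambda S: math.sqrt(sum(data[a] for a in S))

shares = shapley_values(list(data), v)
# By efficiency, the shares sum to the grand-coalition value v({A,B,C}),
# so payments that redistribute payoffs to equal Shapley shares
# are budget-balanced -- the property reciprocal fairness targets.
```

In this toy example, agent A, who contributes more data, receives a strictly larger Shapley share than the symmetric agents B and C, who receive equal shares.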
Primary Area: Social Aspects->Fairness
Keywords: Federated learning, Nash equilibrium, mechanism design, fairness
Submission Number: 8749