Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

May 21, 2021 (edited Oct 26, 2021) · NeurIPS 2021 Poster
  • Keywords: Federated learning, Shapley value, Fairness, Incentives
  • TL;DR: A gradient-based reward mechanism to ensure Shapley-fairness in collaborative machine learning.
  • Abstract: Collaborative machine learning provides a promising framework for different agents to pool their resources (e.g., data) for a common learning task. In realistic settings where agents are self-interested rather than altruistic, they may be unwilling to share their data or models without adequate rewards. Furthermore, as the data/models shared by the agents may differ in quality, designing rewards that are fair to them is important so that they neither feel exploited nor become discouraged from sharing. In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents’ uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance. Compared to existing baselines, our approach is more efficient and does not require a validation dataset. We perform extensive experiments to demonstrate that our proposed approach achieves better fairness and predictive performance.
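To make the abstract's core idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of a cosine-based gradient Shapley value: each coalition of agents is valued by the cosine similarity between its summed gradients and the aggregated gradient of all agents, and Shapley values are computed by averaging marginal contributions over permutations. The function names and the exact choice of characteristic function are assumptions for illustration.

```python
import math
from itertools import permutations

def cosine(u, v):
    # Cosine similarity between two gradient vectors; 0 for zero vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def coalition_value(grads, coalition, target):
    # Value of a coalition: cosine similarity between the sum of its
    # members' gradients and the aggregated (all-agent) gradient.
    if not coalition:
        return 0.0
    summed = [0.0] * len(target)
    for i in coalition:
        for d, g in enumerate(grads[i]):
            summed[d] += g
    return cosine(summed, target)

def cosine_gradient_shapley(grads):
    # Exact Shapley values by enumerating all agent orderings
    # (feasible only for a handful of agents; shown for clarity).
    n = len(grads)
    dim = len(grads[0])
    target = [sum(g[d] for g in grads) for d in range(dim)]
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for perm in perms:
        coalition, prev = [], 0.0
        for i in perm:
            coalition.append(i)
            val = coalition_value(grads, coalition, target)
            phi[i] += val - prev
            prev = val
    return [p / len(perms) for p in phi]
```

Note that this valuation needs only the uploaded gradients, which reflects the abstract's claim that no validation dataset is required; the paper's actual mechanism additionally converts these values into rewards delivered as better model performance.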
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: