Incentives in Private Collaborative Machine Learning

Published: 21 Sept 2023, Last Modified: 28 Dec 2023 | NeurIPS 2023 poster
Keywords: Incentives, Privacy, Shapley fairness, Collaborative machine learning, data valuation, reward, sufficient statistics
TL;DR: We propose a way to value and reward parties that satisfies incentives such as individual rationality, fairness, and privacy, while deterring excessive privacy demands.
Abstract: Collaborative machine learning involves training models on data from multiple parties but requires incentivizing their participation. Existing data valuation methods fairly value and reward each party based on shared data or model parameters but neglect the privacy risks involved. To address this, we introduce _differential privacy_ (DP) as an incentive. Each party can select its required DP guarantee and perturb its _sufficient statistic_ (SS) accordingly. The mediator values the perturbed SS by the Bayesian surprise it elicits about the model parameters. As our valuation function enforces a _privacy-valuation trade-off_, parties are deterred from selecting excessive DP guarantees that reduce the utility of the grand coalition's model. Finally, the mediator rewards each party with different posterior samples of the model parameters. Such rewards still satisfy existing incentives like fairness but additionally preserve DP and a high similarity to the grand coalition's posterior. We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
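To make the pipeline in the abstract concrete, here is a minimal sketch of the perturb-then-value step for a single party. It assumes a toy Bernoulli model with a uniform Beta(1,1) prior, where the SS is a success count perturbed by the Laplace mechanism, and the Bayesian surprise is the KL divergence of a noise-aware posterior (computed on a grid) from the prior. The function names (`noise_aware_surprise`) and all modeling choices are illustrative assumptions, not the paper's actual implementation, which covers general exponential-family models and rewards parties with posterior samples.

```python
import numpy as np
from scipy.stats import binom, laplace

def noise_aware_surprise(s_noisy, n, epsilon, grid_size=2001):
    """Bayesian surprise KL(p(theta | s_noisy) || p(theta)) on a grid.

    The posterior integrates over the Laplace noise added to the
    sufficient statistic (a noise-aware update), so stronger privacy
    (smaller epsilon) flattens it back toward the prior.
    """
    theta = np.linspace(1e-4, 1.0 - 1e-4, grid_size)
    s = np.arange(n + 1)
    # p(s_noisy | theta) = sum_s Binom(s; n, theta) * Laplace(s_noisy - s)
    noise_lik = laplace.pdf(s_noisy - s, scale=1.0 / epsilon)   # (n+1,)
    pmf = binom.pmf(s[None, :], n, theta[:, None])              # (grid, n+1)
    likelihood = pmf @ noise_lik                                # (grid,)
    post = likelihood / likelihood.sum()         # posterior mass (uniform prior)
    prior = np.full_like(post, 1.0 / grid_size)  # uniform Beta(1,1) prior mass
    return float(np.sum(post * np.log(post / prior)))           # KL in nats

rng = np.random.default_rng(0)
n = 200
s_true = int(rng.binomial(n, 0.7))  # a party's true sufficient statistic
for eps in (0.01, 0.1, 1.0, 10.0):
    # Laplace mechanism: the count has sensitivity 1, so scale = 1/epsilon
    s_noisy = s_true + rng.laplace(scale=1.0 / eps)
    print(f"epsilon = {eps:>5}: Bayesian surprise ~ "
          f"{noise_aware_surprise(s_noisy, n, eps):.3f} nats")
```

Running this shows the surprise shrinking as epsilon decreases: heavier noise flattens the noise-aware posterior toward the prior, so a party demanding an excessive DP guarantee earns a lower valuation. This is the privacy-valuation trade-off the abstract refers to.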
Supplementary Material: zip
Submission Number: 5725