Keywords: Incentive, Data Contribution, Personalization, Federated Learning
TL;DR: We propose an incentivized and inclusive model-sharing market that utilizes decentralized private data
Abstract: While data plays a crucial role in training contemporary AI models, it is widely acknowledged that valuable public data will be exhausted within a few years, turning the world's attention toward the massive amounts of decentralized private data.
However, the privacy-sensitive nature of raw data and the lack of incentive mechanisms prevent these valuable data from being fully exploited.
Addressing these challenges, this paper proposes inclusive and incentivized personalized federated learning (iPFL), which incentivizes data holders with diverse purposes to collaboratively train personalized models without revealing raw data.
iPFL constructs a model-sharing market by solving a graph-based training optimization problem and incorporates an incentive mechanism based on game-theoretic principles.
Theoretical analysis shows that iPFL satisfies two key incentive properties: individual rationality and truthfulness.
Empirical studies on eleven AI tasks (e.g., large language models' instruction-following tasks) demonstrate that iPFL consistently achieves the highest economic utility and better or comparable model performance relative to baseline methods.
We anticipate that iPFL can serve as a valuable technique for boosting future AI models on decentralized private data while keeping all participants satisfied.
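To make the graph-based model-sharing idea mentioned in the abstract more concrete, below is a minimal Python sketch of a generic setup of this kind; it is not the paper's actual iPFL algorithm. The collaboration weights, the gain() function, and the pricing rule are all assumptions introduced here purely for illustration of how personalized aggregation over a graph and an individually rational payment could fit together.

```python
# Illustrative sketch only: the abstract does not specify iPFL's formulation.
# Clients hold local model parameters privately, a collaboration graph assigns
# edge weights, each client receives a personalized aggregate of its neighbors'
# models, and pays an amount no larger than its gain (individual rationality).
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 8

# Local model parameters held privately by each client (raw data never shared).
local_models = rng.normal(size=(n_clients, dim))

# Collaboration graph: w[i, j] is how much client i weights client j's model.
# Here it is random and row-normalized; the actual weights in iPFL would come
# from its graph-based training optimization, which is not shown here.
w = rng.uniform(size=(n_clients, n_clients))
np.fill_diagonal(w, 1.0)
w /= w.sum(axis=1, keepdims=True)

# Personalized model for each client: weighted average over its graph neighborhood.
personalized = w @ local_models


def gain(i: int) -> float:
    """Toy utility gain: how much closer the personalized model is to a stand-in
    target than the purely local model. Purely illustrative, not from the paper."""
    target = local_models.mean(axis=0)
    before = np.linalg.norm(local_models[i] - target)
    after = np.linalg.norm(personalized[i] - target)
    return float(before - after)


# Hypothetical pricing rule that keeps this toy market individually rational:
# each client pays at most its gain, so utility = gain - payment >= 0.
for i in range(n_clients):
    g = gain(i)
    payment = 0.5 * max(g, 0.0)
    print(f"client {i}: gain={g:+.3f}, payment={payment:.3f}, utility={g - payment:+.3f}")
```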
Submission Number: 63