Abstract: Collaborative learning enables multiple participants to learn a single global model by exchanging model updates instead of sharing data. One of the core challenges in collaborative learning is ensuring that participants are rewarded fairly for their contributions, which entails two key sub-problems: contribution assessment and reward allocation. This work focuses on fair reward allocation, where participants are incentivized through model rewards: differentiated final models whose performance is commensurate with each participant's contribution. We leverage the concept of slimmable neural networks to collaboratively learn a shared global model whose performance degrades gracefully with a reduction in model width. We also propose a post-training fair allocation algorithm that determines the model width for each participant based on their contributions. We theoretically study the convergence of our proposed approach and empirically validate it through extensive experiments on different datasets and architectures. Finally, we extend our approach to enable training-time model reward allocation.
Lay Summary: Collaborative learning allows multiple participants, such as devices, organizations, or users, to work together to train a shared machine learning model without sharing their private data. One of the biggest challenges in this setting is making sure everyone is rewarded fairly based on how much they actually contributed to the final model. This paper focuses on solving this fairness problem.
Instead of giving everyone the same model, we propose giving each participant a personalized version of the final model, where the model's size (or "width") reflects their contribution. To do this, we use a technique called slimmable neural networks, which can flexibly adjust their width without needing to be retrained from scratch. After training, we use a new algorithm to decide what width each participant deserves, so that those who contributed more receive better-performing models.
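The width-switching idea can be illustrated with a minimal sketch. This is a hypothetical toy layer, not the paper's implementation: a single full-width weight matrix is trained once, and every narrower sub-network simply reuses its leading channels, so no retraining is needed.

```python
import numpy as np

class SlimmableLinear:
    """Toy slimmable fully connected layer (illustrative only):
    the leading output channels of one full-width weight matrix
    form every narrower sub-network."""

    def __init__(self, in_features, out_features, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((out_features, in_features)) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x, width_mult=1.0):
        # Keep only the leading fraction of output channels; a narrower
        # model reuses the same trained weights.
        k = max(1, int(round(self.W.shape[0] * width_mult)))
        return x @ self.W[:k].T + self.b[:k]

layer = SlimmableLinear(in_features=8, out_features=4)
x = np.ones((2, 8))
full = layer.forward(x, width_mult=1.0)  # all 4 output channels
half = layer.forward(x, width_mult=0.5)  # only the first 2 channels
```

Because the half-width output is just a slice of the full-width output's channels, a participant awarded a smaller width runs a strictly smaller (and typically weaker) sub-network of the same shared model.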
We also show that this method is theoretically sound and performs well in practice. Finally, we extend our approach so that fair model allocation can happen not only after training, but also during the training process itself. This helps ensure participants are fairly treated throughout the learning process.
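One simple way to picture the allocation step is a monotone mapping from contribution scores to available widths. The function below is a hypothetical illustration, not the paper's allocation algorithm: it scales each participant's width by their contribution relative to the top contributor and snaps to the nearest supported width.

```python
def allocate_widths(contributions, widths=(0.25, 0.5, 0.75, 1.0)):
    """Toy contribution-to-width mapping (illustrative only): larger
    contributors receive wider, better-performing sub-models."""
    top = max(contributions)
    alloc = {}
    for i, c in enumerate(contributions):
        target = c / top  # contribution relative to the top contributor
        # Snap to the nearest width the slimmable model supports.
        alloc[i] = min(widths, key=lambda w: abs(w - target))
    return alloc

result = allocate_widths([0.2, 0.5, 1.0])
```

Here the top contributor gets the full-width model, while smaller contributors receive proportionally narrower ones; any real scheme would additionally need to handle ties and guarantee monotonicity in the contribution scores.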
Link To Code: https://github.com/tnurbek/aequa
Primary Area: Optimization->Large Scale, Parallel and Distributed
Keywords: federated learning, fairness, collaborative fairness
Submission Number: 1600