Performance Adjustment for Federated Learning Marketplace

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Federated Learning, Incentive Mechanism
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: In federated learning (FL), client participation is mainly motivated by performance-gain rewards or monetary rewards. In practice, different clients may have varying preferences over these two types of rewards. However, optimizing the training process so that model performance and monetary rewards align with client expectations remains an open challenge. To accommodate diverse reward preferences, we propose Alpha-Tuning, an FL performance adjustment framework guided by dynamic validation loss composition. The core of our framework is a mechanism that decides the weight assigned to each client's local validation loss, determined by that client's performance contribution in the given training round and its monetary quotation for biasing the FL course in its favor. The training hyper-parameters and model aggregation weights are adjusted together with the model parameters to minimize the weighted sum of the clients' local validation losses. Paired with a payment rule designed to compensate the clients according to their data contributions, Alpha-Tuning balances the clients' preferences between performance gain and monetary reward. We demonstrate the effectiveness of our framework through experiments on federated learning tasks under various client quotation settings.
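
The abstract describes an objective built as an alpha-weighted sum of clients' local validation losses, with each weight driven by the client's per-round performance contribution and monetary quotation. Below is a minimal Python sketch of that idea; the multiplicative score, the softmax normalization, and all names (`alpha_weights`, `weighted_validation_loss`, `temperature`) are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def alpha_weights(contributions, quotations, temperature=1.0):
    """Illustrative weight rule (assumption): each client's alpha grows
    with its per-round performance contribution and its monetary
    quotation; a softmax normalizes the weights to sum to one."""
    scores = np.asarray(contributions, dtype=float) * np.asarray(quotations, dtype=float)
    exp_scores = np.exp(scores / temperature)
    return exp_scores / exp_scores.sum()

def weighted_validation_loss(local_val_losses, alphas):
    """The objective the framework minimizes when tuning hyper-parameters
    and model aggregation weights: the alpha-weighted sum of the
    clients' local validation losses."""
    return float(np.dot(alphas, local_val_losses))

# Hypothetical round with three clients: contributions, quotations,
# and local validation losses are made-up numbers for illustration.
alphas = alpha_weights(contributions=[0.5, 0.3, 0.2],
                       quotations=[1.0, 2.0, 0.5])
loss = weighted_validation_loss([0.9, 1.1, 0.7], alphas)
print(alphas, loss)
```

In such a scheme, a client that quotes more money or contributes more in a round pulls the shared objective toward its own validation loss, which is one plausible way to realize the preference balancing the abstract describes.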
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6984