Rethinking the Starting Point: Enhancing Performance and Fairness of Federated Learning via Collaborative Pre-Training

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: federated learning, pre-training
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Most existing federated learning (FL) methodologies have been developed starting from a randomly initialized model. Recently, several studies have empirically demonstrated that leveraging a pre-trained model can offer an advantageous initialization for FL. In this paper, we depart from the assumption of centralized pre-training and instead focus on a practical FL setting, where data samples are distributed among both clients and the server even during the pre-training phase. We propose a collaborative pre-training approach for FL (CoPreFL), whose goal is to strategically design a pre-trained model that serves as a good initialization for any downstream FL task. The key idea of our pre-training algorithm is to employ meta-learning to simulate downstream distributed scenarios, enabling the pre-trained model to adapt to unforeseen FL tasks. During optimization, CoPreFL also strikes a balance between average performance and fairness, with the aim of addressing the challenges of downstream FL tasks through the initialization. Extensive experimental results validate that our pre-training method provides a robust initialization for unseen downstream FL tasks, resulting in enhanced average performance and more equitable predictions.
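To make the idea in the abstract concrete, the following is a minimal first-order sketch in PyTorch of how a meta-learned pre-training round with a fairness-aware objective could be set up. It is not the authors' algorithm: the helper names (local_adapt, copre_fl_round), the support/query task structure, and the use of the variance of per-client query losses as the fairness term are illustrative assumptions based only on the abstract's description.

import copy
import torch
import torch.nn as nn

def local_adapt(model, support, lr=0.01, steps=1):
    # Clone the shared initialization and take a few SGD steps on one
    # simulated client's support set.
    client = copy.deepcopy(model)
    opt = torch.optim.SGD(client.parameters(), lr=lr)
    for _ in range(steps):
        x, y = support
        opt.zero_grad()
        nn.functional.cross_entropy(client(x), y).backward()
        opt.step()
    return client

def copre_fl_round(model, meta_lr, client_tasks, balance=0.5):
    # One simulated FL round (first-order approximation): adapt a copy of the
    # model per client, then update the shared initialization with the gradient
    # of   mean(query losses) + balance * variance(query losses),
    # where the variance term serves as a proxy for fairness across clients.
    losses, grads = [], []
    for support, query in client_tasks:
        adapted = local_adapt(model, support)
        x_q, y_q = query
        loss = nn.functional.cross_entropy(adapted(x_q), y_q)
        grads.append(torch.autograd.grad(loss, tuple(adapted.parameters())))
        losses.append(loss.detach())
    losses = torch.stack(losses)
    n, mean = len(losses), losses.mean()
    # d/dtheta [mean + balance * var] = sum_i w_i * dL_i/dtheta, with
    # per-client weights w_i = (1 + 2 * balance * (L_i - mean)) / n.
    weights = (1.0 + 2.0 * balance * (losses - mean)) / n
    with torch.no_grad():
        for j, p in enumerate(model.parameters()):
            p -= meta_lr * sum(w * g[j] for w, g in zip(weights, grads))
    return mean.item(), losses.var(unbiased=False).item()

Setting balance to 0 reduces the meta-update to optimizing average simulated performance only; larger values penalize dispersion of client losses, which is one plausible way to trade average accuracy against fairness at initialization.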
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3574