MODULAR FEDERATED CONTRASTIVE LEARNING WITH PEER NORMALIZATION

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: federated learning, contrastive learning, normalization
Abstract: Despite recent progress in federated learning (FL), the fundamental challenge of training a global model across multiple clients with heterogeneous and class-imbalanced (CIB) data has not been fully resolved. Furthermore, most existing works on FL with heterogeneous data assume that the clients have fully labeled data, which may be impractical in real-world scenarios given the difficulty of labeling, especially at the clients. In this paper, we provide a solution for the realistic FL setting in which the clients have unlabeled, heterogeneous, and CIB data. To address the issue of biased gradients when training on heterogeneous and CIB data, we develop a new FL framework called Modular Federated Contrastive Learning (MFCL). Instead of training a whole deep network federatedly across the clients, we propose to train two separate network modules at the clients and the server. The first is a sensor module, trained federatedly across the clients, that extracts representations from the clients' unlabeled data; these representations are sent to the server. The second is a discriminator module at the server, trained with a contrastive loss on the representations received from the clients. We also propose a new normalization technique, Peer Normalization (PN), tailored to contrastive FL, which reduces the gradient bias caused by training on heterogeneous and CIB data across the clients. Our experiments show that the proposed MFCL with PN provides high and stable accuracy, achieving state-of-the-art performance when the clients have (severely) heterogeneous and CIB data.
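To make the two-module split concrete, below is a minimal PyTorch sketch of the training flow the abstract describes: a client-side sensor module encodes two augmented views into representations, and a server-side discriminator module is trained on them with a contrastive loss. Everything here is an illustrative assumption rather than the paper's implementation: the MLP architectures, dimensions, and the NT-Xent-style loss are stand-ins, and Peer Normalization is omitted because the abstract does not specify its form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorModule(nn.Module):
    """Client-side encoder ('sensor module'); in MFCL its weights would be
    trained federatedly across clients (this architecture is a stand-in)."""
    def __init__(self, in_dim=784, rep_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, rep_dim))

    def forward(self, x):
        return self.net(x)

class DiscriminatorModule(nn.Module):
    """Server-side module trained with a contrastive loss on the
    representations uploaded by the clients (also a stand-in)."""
    def __init__(self, rep_dim=128, proj_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim, rep_dim), nn.ReLU(), nn.Linear(rep_dim, proj_dim))

    def forward(self, h):
        return self.net(h)

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two views (an assumed choice; the
    paper's exact contrastive objective may differ)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / temperature                        # pairwise similarities
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # drop self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                 # positive = the other view

# Toy round with random data (real clients would use image augmentations;
# Peer Normalization of the representations is not reproduced here).
sensor, disc = SensorModule(), DiscriminatorModule()
x = torch.randn(32, 784)
h1 = sensor(x + 0.1 * torch.randn_like(x))   # client: view 1 -> representation
h2 = sensor(x + 0.1 * torch.randn_like(x))   # client: view 2 -> representation
loss = nt_xent_loss(disc(h1), disc(h2))      # server: contrastive training step
loss.backward()
```

One consequence of this split, as the abstract emphasizes, is that only data representations travel from the clients to the server, rather than the full model being trained end-to-end across the federation.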
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning
Supplementary Material: zip