FedMEKT: Split Multimodal Embedding Knowledge Transfer in Federated Learning

Published: 01 Feb 2023, Last Modified: 13 Feb 2023 · ICLR 2023 Conference Withdrawn Submission
Keywords: Semi-supervised learning, Multimodal Learning, Federated Learning, Knowledge Transfer
TL;DR: The paper designs a split embedding knowledge transfer mechanism for multimodal federated learning under a semi-supervised learning setting.
Abstract: Federated Learning (FL) enables a decentralized machine-learning paradigm in which clients collaboratively train a generalized global model without sharing their private data. However, most existing FL approaches use only single-modal data, preventing these systems from exploiting the valuable multimodal data that future personalized applications will generate. Moreover, most FL methods still rely on labeled data at the client side, which is scarce in real-world applications because users rarely annotate their own data. To leverage representations from different modalities in FL, we propose a novel multimodal FL framework for the semi-supervised setting. Specifically, we develop a split multimodal embedding knowledge transfer mechanism for federated learning, namely FedMEKT, which exchanges generalized and personalized multimodal representations between the server and clients through a small multimodal proxy dataset. FedMEKT iteratively updates the generalized encoders using the collaborative embedding knowledge of each client, such as modality-averaged representations. The generalized encoder then guides the personalized encoders to improve the generalization ability of client models; afterward, personalized classifiers are trained on the labeled proxy data to perform supervised tasks. Extensive experiments on three multimodal human activity recognition tasks demonstrate that FedMEKT achieves superior performance for both local and global encoder models under linear evaluation while preserving user privacy for personal data and model parameters.
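To make the abstract's transfer mechanism concrete, the following is a minimal sketch of the server-side aggregation step: each client encodes a shared proxy batch with its personalized encoders, and the server averages those embeddings across clients and then across modalities to form a "modality-averaged" target that distillation could pull the generalized encoder toward. All names, dimensions, the two modalities, and the plain-MSE distillation loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical setup: 3 clients, a shared proxy batch of 8 samples,
# two modalities (e.g. accelerometer and gyroscope), embedding dim 4.
n_clients, n_proxy, dim = 3, 8, 4
modalities = ("accel", "gyro")

def client_embeddings(seed):
    """Stand-in for one client's personalized encoders applied to the proxy batch."""
    rng = np.random.default_rng(seed)
    return {m: rng.normal(size=(n_proxy, dim)) for m in modalities}

client_embs = [client_embeddings(k) for k in range(n_clients)]

# Server side: average each modality's proxy embeddings across clients...
per_modality = {
    m: np.mean([e[m] for e in client_embs], axis=0) for m in modalities
}
# ...then average across modalities to obtain the collaborative target.
collab_target = np.mean(list(per_modality.values()), axis=0)  # shape (n_proxy, dim)

def distill_loss(z, target):
    """Illustrative distillation objective (plain MSE) pulling an encoder's
    proxy embeddings toward the collaborative target."""
    return float(np.mean((z - target) ** 2))
```

In a full training loop, the generalized encoder would minimize `distill_loss` on the proxy batch, and each client would analogously distill the generalized embeddings back into its personalized encoder, so that only embeddings of proxy data (not raw user data or model parameters) cross the network.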
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (ie none of the above)
Supplementary Material: zip