Disentangled Knowledge Transfer: A New Perspective for Personalized Federated Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Personalized Federated Learning, Model Disentanglement, Multi-task Learning
Abstract: Personalized federated learning (pFL) aims to collaboratively train non-identical machine learning models for different clients so that each model adapts to its client's heterogeneously distributed data. State-of-the-art pFL approaches focus on exploiting inter-client similarities to facilitate collaborative learning, yet they can hardly escape the pooling of irrelevant knowledge that is inevitable during the aggregation phase (e.g., due to inconsistent classes among clients), which slows optimization convergence and degrades personalization performance. To resolve this conflict between facilitating collaboration and promoting personalization, we propose a novel pFL framework, dubbed pFedC, which disentangles the globally aggregated knowledge into several compositional branches and aggregates only the relevant branches, thereby supporting conflict-aware collaboration among contradictory clients. Specifically, by reconstructing each local model into a shared feature extractor and multiple disentangled task-specific classifiers, training on each client becomes a mutually reinforced yet relatively independent multi-task learning process, which provides a new perspective for pFL. In addition, we apply a personalized aggregation mechanism to the disentangled classifiers that quantifies combination weights for each client, capturing the clients' common prior while mitigating potential conflicts arising from the divergent knowledge induced by heterogeneous data. Extensive experiments over various models and datasets verify the effectiveness and superior performance of the proposed algorithm.
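The abstract describes a model split into a shared feature extractor plus multiple disentangled task-specific classifier branches, with a per-client weighted aggregation of those branches. Below is a minimal PyTorch sketch of that idea, assuming a simple MLP backbone; the class and function names (`DisentangledModel`, `aggregate_heads`) and the weighting scheme are illustrative assumptions, not the authors' released code.

```python
# Sketch of the architecture outlined in the abstract: a shared feature
# extractor, several disentangled task-specific classifier heads, and a
# per-client weighted aggregation of a single head across clients.
# All names and the weighting scheme are assumptions for illustration.
import torch
import torch.nn as nn


class DisentangledModel(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, num_heads: int):
        super().__init__()
        # Shared feature extractor (a small MLP here, purely for illustration).
        self.extractor = nn.Sequential(
            nn.Linear(784, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Multiple disentangled task-specific classifier heads.
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_heads)]
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        z = self.extractor(x.flatten(1))
        # Each head makes its own prediction, so local training resembles a
        # multi-task problem over the shared features.
        return [head(z) for head in self.heads]


@torch.no_grad()
def aggregate_heads(client_heads: list[nn.Linear], weights: torch.Tensor) -> nn.Linear:
    """Combine the same classifier branch from several clients using
    per-client combination weights (a stand-in for personalized aggregation)."""
    weights = weights / weights.sum()
    out = nn.Linear(client_heads[0].in_features, client_heads[0].out_features)
    out.weight.copy_(sum(w * h.weight for w, h in zip(weights, client_heads)))
    out.bias.copy_(sum(w * h.bias for w, h in zip(weights, client_heads)))
    return out
```

In this sketch, a server would call `aggregate_heads` once per branch, giving larger weights to clients whose local data are relevant to that branch, so that irrelevant branches from conflicting clients contribute little to the aggregate.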
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
TL;DR: We present pFedC, a novel training method for personalized federated learning that avoids aggregating irrelevant knowledge from other clients.
Supplementary Material: zip