Abstract: Partial Personalized Federated Learning (PFL) aims to balance generalization and personalization by decoupling models into shared and personalized layers. However, existing methods typically rely on rigid, static partitioning, which leads to significant global-local model discrepancies, client drift, and catastrophic forgetting. To overcome these limitations, we propose pMixFed, a dynamic, layer-wise PFL approach that integrates an adaptive mixing mechanism (inspired by Mixup) directly into the parameter space. Unlike static methods, pMixFed employs an adaptive strategy to dynamically partition layers and uses a gradual transition of personalization degrees to smooth the integration of global and local knowledge. This mechanism effectively mitigates the “hard split” issues found in prior work. Extensive experiments demonstrate that pMixFed consistently outperforms competitive baselines (such as FedAlt and FedSim) in heterogeneous settings, exhibiting faster training, increased robustness against performance drops, and a self-tuning mechanism that effectively handles cold-start users.
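To make the gradual, layer-wise mixing idea concrete, below is a minimal PyTorch sketch that blends a global and a local model parameter-by-parameter. It assumes a simple linear per-layer schedule for the mixing coefficient; the function name `mix_models` and the schedule are illustrative assumptions only, not the paper's actual algorithm, which adapts the partition and personalization degrees dynamically.

```python
# Illustrative sketch only: a fixed linear lambda schedule standing in for
# pMixFed's adaptive, dynamically learned mixing degrees.
from collections import OrderedDict
import torch

def mix_models(global_state: OrderedDict, local_state: OrderedDict) -> OrderedDict:
    """Blend global and local parameters layer by layer.

    Early layers lean toward the shared global model (lambda near 1),
    later layers toward the personalized local model (lambda near 0),
    giving a gradual transition instead of a hard shared/personal split.
    """
    names = list(global_state.keys())
    n = len(names)
    mixed = OrderedDict()
    for i, name in enumerate(names):
        lam = 1.0 - i / max(n - 1, 1)  # assumed linear schedule from 1 down to 0
        mixed[name] = lam * global_state[name] + (1.0 - lam) * local_state[name]
    return mixed

# Usage: mixed = mix_models(global_model.state_dict(), local_model.state_dict())
#        local_model.load_state_dict(mixed)
```

Under this interpolation view, a hard split is the special case where lambda jumps from 1 to 0 at a fixed layer index, which is exactly the discontinuity the smooth schedule avoids.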
Submission Type: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=BrSOGUn8b7&nesting=2&sort=date-desc
Changes Since Last Submission: The previous submission was desk rejected because it contained an acknowledgements section, which violates the double-blind policy. We have removed it and are resubmitting the corrected version.
Assigned Action Editor: ~Zachary_B._Charles1
Submission Number: 7285