Decentralized Federated Learning with Function Space Regularization

TMLR Paper 4870 Authors

16 May 2025 (modified: 27 May 2025) · Under review for TMLR · CC BY 4.0
Abstract: In this work we propose FedFun, a novel framework for decentralized federated learning that enforces consensus across clients in function space rather than parameter space. By framing agreement as a regularization penalty in a Hilbert space of hypotheses, our method admits optimization via proximal gradient updates that encourage similarity between neighboring models while supporting both parametric and non-parametric learners. This function-space perspective yields theoretical advantages, including convergence guarantees that hold even when individual client objectives are non-convex in parameter space, as well as improved robustness to client heterogeneity. We provide a convergence analysis under mild assumptions, demonstrate compatibility with model classes such as neural networks and decision trees, and empirically evaluate implementations of FedFun on a variety of sample datasets.
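To make the abstract's core idea concrete, below is a minimal sketch of a function-space consensus penalty of the kind described: the Hilbert-space distance between neighboring clients' hypotheses is approximated empirically by evaluating both models on a shared set of anchor inputs, and each client takes a proximal-style local step on its loss plus the penalty. This is an illustration under assumptions, not the paper's implementation; all names (`function_space_penalty`, `local_step`, `anchors`, `lam`) are hypothetical.

```python
import torch
import torch.nn as nn

def function_space_penalty(model_i, neighbor_models, anchors):
    """Empirical L2 function-space distance, approximated on anchor inputs:
    ||f_i - f_j||^2 ~= (1/m) * sum_x ||f_i(x) - f_j(x)||^2 over m anchors.
    (Assumed discretization of the Hilbert-space norm; not from the paper.)
    """
    out_i = model_i(anchors)
    penalty = 0.0
    for model_j in neighbor_models:
        with torch.no_grad():  # neighbor models are held fixed during the local step
            out_j = model_j(anchors)
        penalty = penalty + ((out_i - out_j) ** 2).mean()
    return penalty

def local_step(model_i, neighbor_models, data, target, anchors,
               loss_fn=nn.CrossEntropyLoss(), lam=0.1, lr=1e-2):
    """One proximal-style local update: local loss plus function-space consensus."""
    opt = torch.optim.SGD(model_i.parameters(), lr=lr)
    opt.zero_grad()
    loss = loss_fn(model_i(data), target)
    loss = loss + lam * function_space_penalty(model_i, neighbor_models, anchors)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: two clients, each treating the other as its sole neighbor.
torch.manual_seed(0)
clients = [nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
           for _ in range(2)]
anchors = torch.randn(32, 4)                       # shared anchor inputs
x, y = torch.randn(16, 4), torch.randint(0, 3, (16,))
local_step(clients[0], [clients[1]], x, y, anchors)
```

Because the penalty compares model outputs rather than parameters, the same update applies to any learner that can be evaluated on the anchor inputs, which is consistent with the abstract's claim of supporting both parametric and non-parametric models.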
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Arya_Mazumdar1
Submission Number: 4870