Abstract: We propose FedFun, a framework for decentralized federated learning that enforces consensus across clients in function space rather than parameter space. By framing agreement as a regularization penalty in a Hilbert space of hypotheses, our method admits optimization via proximal gradient updates that encourage similarity between neighboring models while supporting both parametric and non-parametric learners. This function-space perspective enables convergence guarantees under mild assumptions, covering settings where client objectives are non-convex in the usual (parameter-space) sense and where clients use different architectures. Beyond the convergence analysis, we demonstrate compatibility with models such as neural networks and decision trees, and empirically evaluate implementations of FedFun on several datasets.
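As a concrete reading of the abstract's setup, here is a minimal sketch of the kind of function-space consensus objective it describes; the local risks R_i, the communication graph G = (V, E), the penalty weight \lambda, and the step size \eta are illustrative assumptions, not notation taken from the paper:

    \min_{f_1, \dots, f_m \in \mathcal{H}} \; \sum_{i=1}^{m} R_i(f_i) \;+\; \frac{\lambda}{2} \sum_{(i,j) \in E} \lVert f_i - f_j \rVert_{\mathcal{H}}^2

with, for each client i, a proximal gradient step on its local risk followed by a pull toward its neighbors N(i) in \mathcal{H}:

    f_i^{t+1} = \operatorname{prox}_{\eta \lambda \phi_i}\bigl( f_i^{t} - \eta \nabla R_i(f_i^{t}) \bigr), \qquad \phi_i(f) = \frac{1}{2} \sum_{j \in N(i)} \lVert f - f_j^{t} \rVert_{\mathcal{H}}^2 .

Since \phi_i is quadratic, this proximal step has a closed form: a convex combination of the local gradient step and the average of the neighbors' current hypotheses. Because the penalty compares hypotheses as elements of \mathcal{H} rather than as parameter vectors, the update remains well defined when neighboring clients use different architectures, consistent with the abstract's claim of supporting both parametric and non-parametric learners.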
Submission Type: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=JPlm1i4sxb
Changes Since Last Submission: Additional baselines and a cost discussion added to the experiments section; hyperparameter-sensitivity and dimensionality analyses added to the appendix; clarified the risk formulations and the applicability of the theory.
Assigned Action Editor: ~Arya_Mazumdar1
Submission Number: 7485