Decentralized Federated Learning using Gaussian Processes

Published: 01 Jan 2023 · Last Modified: 17 Jan 2025 · MRS 2023 · CC BY-SA 4.0
Abstract: Gaussian process (GP) training of kernel hyperparameters remains a major challenge due to its high computational complexity. The typical GP training method employs maximum likelihood estimation to solve an optimization problem whose cost is cubic in the number of data points at each iteration. In addition, GP training in multi-agent systems requires a significant amount of inter-agent communication, which typically involves sharing local data. In this paper, we propose a scalable optimization algorithm for decentralized learning of GP hyperparameters in multi-agent systems. To distribute the implementation of GP training, we employ the alternating direction method of multipliers (ADMM). We provide a closed-form solution of the nested optimization of decentralized proximal ADMM for the case of GP modeling with the separable squared exponential kernel. Decentralized federated learning is promoted by prohibiting the exchange of local data between agents. The efficiency of the proposed method is illustrated with numerical experiments.
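The cubic per-iteration cost mentioned in the abstract comes from factorizing the kernel matrix when evaluating the GP log marginal likelihood. The following minimal sketch (not the paper's ADMM method, just the standard centralized objective it distributes) uses a single-lengthscale squared exponential kernel for illustration; the function and parameter names are our own:

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Squared exponential kernel: k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2))."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return signal_var * np.exp(-0.5 * sq_dists / lengthscale**2)

def neg_log_marginal_likelihood(X, y, lengthscale, signal_var, noise_var):
    """Standard GP negative log marginal likelihood.

    The Cholesky factorization below costs O(n^3) per evaluation, which is
    the bottleneck that motivates distributing the training across agents.
    """
    n = y.shape[0]
    K = se_kernel(X, X, lengthscale, signal_var) + noise_var * np.eye(n)
    L = np.linalg.cholesky(K)                      # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha
            + np.sum(np.log(np.diag(L)))
            + 0.5 * n * np.log(2.0 * np.pi))
```

Maximum likelihood training minimizes this objective over the hyperparameters (lengthscale, signal variance, noise variance), so the cubic factorization is paid at every optimizer iteration; the paper's decentralized proximal ADMM splits this cost across agents without requiring them to share local data.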