Abstract: Analytic Federated Learning (AFL) is a gradient-free federated learning (FL) paradigm designed to accelerate training by updating the global model in a single step with closed-form least-squares (LS) solutions. However, the resulting global model suffers from performance degradation across clients with heterogeneous data distributions. Meta-learning is a common approach to this problem, delivering personalized local models to individual clients. Yet integrating meta-learning with AFL presents two significant challenges. First, conventional AFL frameworks cannot support deep neural network (DNN) training, which limits the fast-adaptation capability of meta-learning on complex FL tasks. Second, existing meta-learning methods require gradient information, which AFL does not compute. To overcome the first challenge, we propose an AFL framework, namely FedACnnL, in which a layer-wise collaborative DNN training method is designed by modeling the training of each layer as a distributed LS problem. For the second challenge, we further propose an analytic personalized federated meta-learning framework, namely pFedACnnL, which generates a personalized model for each client by analytically solving a local objective that bridges the gap between the global model and the client's individual data distribution. FedACnnL is theoretically proven to require significantly shorter training time than conventional FL frameworks for DNN training, with an empirical reduction of $83\%\sim99\%$. Meanwhile, pFedACnnL outperforms vanilla FedACnnL in test accuracy by $4\%\sim8\%$, and it achieves state-of-the-art (SOTA) model performance in most convex and non-convex settings compared with previous SOTA frameworks.
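For intuition on the closed-form updates the abstract refers to, the following is a minimal sketch of the generic distributed least-squares pattern that AFL-style frameworks build on, not the paper's exact algorithm: each client shares only its local correlation statistics, and the server solves a ridge-regularized LS problem for a layer's weights in one step. The `ridge_lambda` regularizer and the per-layer targets `Y` here are illustrative assumptions.

```python
import numpy as np

def client_ls_statistics(X, Y):
    """Per-client sufficient statistics for a distributed LS problem.

    X: (n_samples, d_in) layer inputs; Y: (n_samples, d_out) layer targets.
    Only these d_in x d_in and d_in x d_out matrices leave the client,
    never the raw data.
    """
    return X.T @ X, X.T @ Y

def server_ls_solve(stats, ridge_lambda=1e-3):
    """Aggregate client statistics and solve for layer weights in one step.

    Closed-form ridge-regularized solution:
        W = (sum_k X_k^T X_k + lambda * I)^{-1} (sum_k X_k^T Y_k)
    """
    A = sum(a for a, _ in stats)
    B = sum(b for _, b in stats)
    d_in = A.shape[0]
    return np.linalg.solve(A + ridge_lambda * np.eye(d_in), B)

# Toy usage: three clients jointly fit one linear layer without gradients.
rng = np.random.default_rng(0)
W_true = rng.normal(size=(5, 3))
stats = []
for _ in range(3):
    X = rng.normal(size=(40, 5))
    Y = X @ W_true + 0.01 * rng.normal(size=(40, 3))
    stats.append(client_ls_statistics(X, Y))
W = server_ls_solve(stats)
print(np.allclose(W, W_true, atol=0.1))  # True: recovered in a single round
```

Because the solution is obtained in a single aggregation round rather than by iterative gradient descent, this is the sense in which such frameworks can sharply reduce training time on tasks that fit the LS formulation.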