Abstract: In recent years, data privacy has attracted increasing attention. Federated learning is a practical solution for training models while preserving data privacy. It has two main characteristics: first, the data on each client are usually non-IID (not independent and identically distributed); second, the data of each client cannot be shared. Because of the non-IID data, the local optimum of each client is often inconsistent with the global optimum: during training, clients optimize along their local directions and drift away from the global optimum. This client drift slows the convergence of the server model, which limits the overall communication efficiency of federated learning. To improve communication efficiency, in this paper we propose a new federated learning framework that integrates multi-level prospective correction factors into the training procedures of both the server and the clients. We introduce a global prospective correction factor in server aggregation to reduce the number of communication rounds and accelerate convergence, and a local prospective correction factor in client training to alleviate client drift. Both correction factors are integrated into a unified federated learning framework to further improve communication efficiency. Extensive experiments on several datasets demonstrate that our method effectively improves communication efficiency and is robust to different federated learning settings.
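The abstract does not specify the exact form of the correction factors, so the following is only a minimal, hypothetical sketch of how a momentum-style global correction at the server and a drift-mitigating local correction at the clients could be wired into a FedAvg-style loop. All names (`beta_g`, `beta_l`, `local_train`, the linear-regression objective, the toy data) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def local_train(w_global, correction, X, y, lr=0.05, steps=5, beta_l=0.5):
    """One client's local SGD on a least-squares loss, nudged by a local
    'prospective correction' toward the previous global update direction."""
    w = w_global.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)      # least-squares gradient
        w -= lr * (grad - beta_l * correction)      # drift-mitigating step
    return w

def federated_round(w_global, prev_update, clients, beta_g=0.5):
    """Aggregate client models, then apply a global 'prospective correction'
    that extrapolates along the previous round's aggregated update."""
    local_ws = [local_train(w_global, prev_update, X, y) for X, y in clients]
    avg_update = np.mean([w - w_global for w in local_ws], axis=0)
    corrected = avg_update + beta_g * prev_update   # look-ahead correction
    return w_global + corrected, avg_update

# Toy usage: two clients whose feature distributions differ (non-IID-like).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for shift in (0.0, 2.0):                            # shifted feature means
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ w_true + rng.normal(0, 0.1, 50)))

w, prev_update = np.zeros(2), np.zeros(2)
for _ in range(30):
    w, prev_update = federated_round(w, prev_update, clients)
print("estimated weights:", w)
```

In this sketch the server reuses the previous round's aggregated update as a momentum-like term, and each client subtracts a scaled version of that same direction from its local gradient, so local steps are pulled toward the global trajectory; the actual construction of the paper's correction factors may differ.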