Abstract: Federated learning (FL) has seen notable advances in the domain of edge computing (EC). However, limited edge resources and device heterogeneity restrict fast training of the FL model. To address this issue, we introduce a biological evolutionary mechanism and a momentum gradient descent (MGD) update approach into FL, forming a scheme we call EMAFL, which aims to accelerate model training and maximize resource utilization simultaneously. Specifically, we first update each device's local model with particle swarm optimization (PSO) and then perform MGD on the updated local model. Next, using a toy example, we illustrate why heterogeneous devices in a resource-limited environment need different numbers of local iterations. We then analyze the convergence of EMAFL under a given resource budget, which yields a mathematical relationship between the number of local iterations on heterogeneous devices and the optimal model parameters. Based on this theoretical analysis, we devise an adaptive control algorithm that determines the local iteration count for each device after every communication round. Finally, extensive experiments against benchmark schemes verify the advantages of EMAFL in model accuracy, resource consumption, and robustness to non-IID data.
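To make the local update concrete, the sketch below illustrates one device combining a PSO position/velocity update with an MGD step, in the spirit of the scheme summarized above. It is a minimal toy sketch, not the paper's algorithm: all hyperparameters, the quadratic stand-in loss, and the use of a broadcast global model as the swarm's global best are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperparameters (not from the paper): PSO inertia/attraction
# coefficients and MGD learning rate / momentum factor.
INERTIA, C1, C2 = 0.7, 1.5, 1.5
LR, MOMENTUM = 0.05, 0.9

def pso_step(x, v, pbest, gbest):
    """One PSO update treating the flattened local model parameters x as a
    particle attracted to its personal best (pbest) and a global best (gbest),
    here assumed to be the server-broadcast global model."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = INERTIA * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
    return x + v, v

def mgd_step(x, m, grad):
    """One momentum gradient descent (MGD) update on the PSO-adjusted model."""
    m = MOMENTUM * m + grad
    return x - LR * m, m

# Toy quadratic local objective f(x) = 0.5 * ||x - target||^2 as a stand-in
# for a device's local loss; its gradient is (x - target).
target = np.array([1.0, -2.0, 0.5])
x = rng.normal(size=3)           # local model parameters
v = np.zeros_like(x)             # PSO velocity
m = np.zeros_like(x)             # MGD momentum buffer
pbest, gbest = x.copy(), target  # illustrative: global model as swarm best

for _ in range(50):              # hypothetical local iteration count
    x, v = pso_step(x, v, pbest, gbest)
    x, m = mgd_step(x, m, x - target)
    # Track the device's personal best under the toy loss.
    if np.sum((x - target) ** 2) < np.sum((pbest - target) ** 2):
        pbest = x.copy()

print("final local parameters:", x)
```

In the full scheme, the adaptive control algorithm would additionally tune the number of local iterations (fixed at 50 here) per device each communication round.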