Abstract: Federated learning is an emerging artificial intelligence paradigm in which clients transmit model parameters instead of local data during training, thereby protecting data privacy, but at the cost of higher communication overhead. This article proposes FedADP, a new federated learning pruning method that adaptively determines a pruning ratio for each layer of each client model without infringing on client privacy, yielding more accurate pruning. Our method not only reduces communication cost during training but also maintains accuracy close to that of the original model. We validate the scheme experimentally on classic models and datasets, comparing it with traditional federated learning in terms of model accuracy, communication cost, and computational cost.
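The communication saving from layer-wise pruning can be illustrated with a generic magnitude-pruning sketch. This is not FedADP's adaptive ratio-selection rule (which the abstract does not specify); `prune_layer` and `sparse_payload` are hypothetical names, and the per-layer ratios here are assumed to be given.

```python
import numpy as np

def prune_layer(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out the `ratio` fraction of entries with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * ratio)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def sparse_payload(weights: np.ndarray):
    """Encode a pruned tensor as (indices, values) for transmission,
    so the client uploads only the surviving parameters."""
    idx = np.flatnonzero(weights)
    return idx.astype(np.int32), weights.ravel()[idx]

# Example: prune one layer at ratio 0.5 and measure the payload.
layer = np.arange(1.0, 11.0).reshape(2, 5)
pruned = prune_layer(layer, 0.5)
indices, values = sparse_payload(pruned)
```

With a 0.5 ratio, only half of the layer's parameters (plus their indices) are transmitted; an adaptive scheme would choose a different ratio per layer and per client instead of a fixed one.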