Adaptive Model Pruning in Federated Learning through Loss Exploration

Published: 18 Jun 2024, Last Modified: 10 Jul 2024
WANT@ICML 2024 Poster
License: CC BY 4.0
Keywords: Complexity Reduction, Federated Learning, Pruning, Knowledge transfer, Non-IID data, Deep Learning
TL;DR: This paper introduces AutoFLIP, a novel federated learning framework that employs federated loss exploration to inform adaptive pruning, enhancing model efficiency and accuracy in non-IID environments.
Abstract: The rapid proliferation of smart devices, coupled with the advent of 6G networks, has profoundly reshaped the domain of collaborative machine learning. Alongside growing privacy and security concerns in sensitive fields, these developments have positioned federated learning (FL) as a pivotal technology for decentralized model training. Despite its vast potential, FL encounters challenges such as elevated communication costs, computational constraints, and the complexities of non-IID data distributions. We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive model pruning. This mechanism automatically identifies and prunes unimportant model parameters by distilling knowledge of model gradient behavior across different non-IID client losses, thereby optimizing computational efficiency and enhancing model performance in resource-constrained scenarios. Extensive experiments across various datasets and FL tasks reveal that AutoFLIP not only efficiently accelerates global convergence but also achieves superior accuracy and robustness compared to traditional methods. On average, AutoFLIP reduces computational overhead by 48.8% and communication costs by 35.5%, while maintaining high accuracy. By significantly reducing these overheads, AutoFLIP paves the way for efficient FL deployment in real-world applications, from healthcare to smart cities.
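The abstract describes scoring parameter importance from gradient behavior across heterogeneous client losses and pruning the least important parameters. The following is a minimal sketch of that general idea only; the function names, the mean-absolute-gradient score, and the quantile-based threshold are illustrative assumptions, not AutoFLIP's actual algorithm.

```python
import numpy as np

def explore_importance(client_grads):
    # Illustrative assumption: score each parameter by the mean absolute
    # gradient observed across clients' local (non-IID) losses.
    return np.mean([np.abs(g) for g in client_grads], axis=0)

def prune_mask(scores, prune_ratio):
    # Illustrative assumption: prune the fraction `prune_ratio` of
    # parameters with the lowest importance scores (True = keep).
    threshold = np.quantile(scores, prune_ratio)
    return scores > threshold

# Toy example: 3 clients, a flat parameter vector of 10 weights.
rng = np.random.default_rng(0)
grads = [rng.normal(size=10) for _ in range(3)]
scores = explore_importance(grads)
mask = prune_mask(scores, prune_ratio=0.5)  # keep the top half
```

In a real FL round, such a mask would be applied to the global model before broadcast, which is one way the communication and computation savings reported above could arise.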
Submission Number: 15