Keywords: communication-efficient federated learning
Abstract: Federated learning (FL) is a promising privacy-preserving distributed learning paradigm for decentralized non-IID data. However, the frequent gradient exchanges it requires introduce significant communication overhead, limiting its scalability. To mitigate this issue, recent work proposes data distillation to reduce communication costs.
Yet existing approaches fail to fully exploit the distilled features, resulting in a suboptimal compression ratio. In this paper, we propose a novel method, Overlapped Feature Synthesis (OFS), that enables global feature sharing during compression, enhancing both communication efficiency and model performance. Specifically, we introduce a global feature sampler that extracts several small feature maps from a large global feature map to enable parameter sharing. To balance global and personalized parameters, we introduce an offset coefficient and multiple sampling strategies that allow a flexible trade-off between compression efficiency and model performance. Extensive experiments demonstrate that OFS achieves better convergence at a lower compression rate than competing methods. Compared to state-of-the-art data distillation methods, our approach improves accuracy by approximately 1\% while maintaining a 10\% higher compression rate. Moreover, we conduct ablation studies and visualizations to investigate how the offset coefficient, the number of clients, and the number of local training epochs affect the effectiveness of our method. Furthermore, we analyze the relationship between global and personalized model parameters.
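The core sampling mechanism described above — extracting several small, overlapping feature maps from one large global feature map, with an offset controlling how much the extracted maps share — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, the 2-D patch layout, and the interpretation of the offset as a stride are all assumptions.

```python
import numpy as np

def sample_overlapped_features(global_map, patch_size, offset):
    """Illustrative sketch: extract overlapping patches from a 2-D global
    feature map. A smaller offset means more overlap between adjacent
    patches, i.e. more parameters shared across the extracted feature maps.
    Names and layout are hypothetical, not the paper's API."""
    h, w = global_map.shape
    patches = []
    for i in range(0, h - patch_size + 1, offset):
        for j in range(0, w - patch_size + 1, offset):
            patches.append(global_map[i:i + patch_size, j:j + patch_size])
    return patches

# A 6x6 global map with 4x4 patches and offset 2 yields a 2x2 grid of
# patches, where horizontally adjacent patches share half their columns.
g = np.arange(36, dtype=float).reshape(6, 6)
patches = sample_overlapped_features(g, patch_size=4, offset=2)
```

Under this sketch, the total number of distinct parameters stored is that of the single global map, while each client-side patch behaves as its own feature map, which is one plausible reading of how overlap trades compression efficiency against per-client flexibility.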
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 11116