Masked Random Noise for Communication-Efficient Federated Learning

Published: 20 Jul 2024, Last Modified: 05 Aug 2024 · MM2024 Poster · CC BY 4.0
Abstract: Federated learning is a promising distributed machine learning paradigm that can effectively protect data privacy. However, it can incur significant communication overhead, which impairs training efficiency. In this paper, we aim to enhance communication efficiency from a new perspective. Specifically, we ask the distributed clients to find optimal model updates, relative to the global model parameters, within predefined random noise. To this end, we propose **Federated Masked Random Noise (FedMRN)**, a novel framework that enables clients to learn a 1-bit mask for each model parameter and to use masked random noise (i.e., the Hadamard product of random noise and masks) to represent model updates. To make FedMRN feasible, we propose an advanced mask training strategy, called progressive stochastic masking (*PSM*). After local training, clients transmit only their local masks and a random seed to the server. Additionally, we provide theoretical guarantees for the convergence of FedMRN under both strongly convex and non-convex assumptions. Extensive experiments are conducted on four popular datasets. The results show that FedMRN exhibits superior convergence speed and test accuracy compared to relevant baselines, while attaining a level of accuracy similar to that of FedAvg.
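The core idea, as stated in the abstract, is that a model update is represented as the Hadamard product of predefined random noise and a learned 1-bit mask, so a client only needs to send the mask and a seed. A minimal sketch of that reconstruction step follows; the mask-selection rule shown here (a sign-matching heuristic) is a hypothetical stand-in, since the paper actually learns the masks via progressive stochastic masking:

```python
import numpy as np

def make_noise(seed: int, shape: tuple, std: float = 0.01) -> np.ndarray:
    # Regenerate the predefined random noise from a shared seed, so the
    # noise itself never needs to be transmitted.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, std, size=shape)

def client_update(global_params: np.ndarray, local_params: np.ndarray, seed: int):
    # Hypothetical mask rule: keep a noise entry when its sign matches the
    # true local update. FedMRN instead *learns* masks during training via
    # progressive stochastic masking; this greedy rule is only illustrative.
    noise = make_noise(seed, global_params.shape)
    true_update = local_params - global_params
    mask = (np.sign(noise) == np.sign(true_update)).astype(np.uint8)  # 1 bit per parameter
    return mask, seed  # the only payload sent to the server

def server_reconstruct(global_params: np.ndarray, mask: np.ndarray, seed: int) -> np.ndarray:
    # The server rebuilds the same noise from the seed and applies the
    # masked random noise (mask ⊙ noise) as the client's model update.
    noise = make_noise(seed, global_params.shape)
    return global_params + mask * noise
```

Under this scheme the uplink cost per parameter drops from 32 bits (a float update) to 1 bit (the mask entry), plus a single seed per round.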
Primary Subject Area: [Systems] Systems and Middleware
Relevance To Conference: Our research contributes to the field of multimedia systems, specifically distributed multimedia systems. Given the escalating volume of multimedia data stored on mobile devices and IoT infrastructure, effectively utilizing this diverse multimodal data while preserving user privacy has become a crucial concern. Federated learning is a promising solution for preserving data privacy in distributed training systems. However, training on multimedia data within the federated learning framework requires frequent communication between clients and the central server, and this substantial communication overhead significantly impedes the overall training efficiency of multimedia systems. Consequently, in this study, we introduce FedMRN to enhance the communication efficiency of federated learning, offering a potential avenue for improving the training efficiency of distributed multimedia systems.
Supplementary Material: zip
Submission Number: 1230