FedMPS: A Robust Differential Privacy Federated Learning Based on Local Model Partition and Sparsification for Heterogeneous IIoT Data

Published: 01 Jan 2025, Last Modified: 12 Nov 2025. IEEE Internet of Things Journal, 2025. License: CC BY-SA 4.0
Abstract: In emerging Industrial Internet of Things (IIoT) applications, federated learning (FL) enables collaborative model training without transmitting raw data. Nevertheless, the transmitted model parameters can still reveal private information, so differential privacy has been combined with FL (DPFL) to further protect local models. However, the noise added in DPFL can severely degrade model performance, especially on the non-independent and identically distributed (non-iid) data typical of IIoT environments, so privacy preservation and utility must be carefully balanced. In this article, we propose FedMPS, a robust DPFL scheme based on local model partition and sparsification for heterogeneous IIoT scenarios. Each local model is divided into a shared part, which is sparsified before noise is added to mitigate its impact, and a private part that remains on the client. We provide a theoretical analysis of the privacy guarantees. Extensive experiments on common datasets, including Fashion-MNIST, CIFAR-10, and CIFAR-100, demonstrate that the proposed approach achieves a better privacy-utility tradeoff, with a 10%–20% improvement over baseline methods, and performs especially well in non-iid scenarios.
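To make the described pipeline concrete, the following is a minimal NumPy sketch of one client's upload step: partition the parameters into shared and private parts, top-k sparsify the shared update, then clip and add Gaussian noise before transmission. This is an illustration under assumptions, not the paper's algorithm: the partition rule (`shared_keys`), the choice of top-k sparsification, and the values of `keep_ratio`, `clip_norm`, and `noise_multiplier` are placeholders, and the paper's actual sparsifier and noise calibration are given in the full text.

```python
import numpy as np

def partition_params(params, shared_keys):
    """Split a client's parameter dict into a shared part (uploaded) and a private part (kept local)."""
    shared = {k: v for k, v in params.items() if k in shared_keys}
    private = {k: v for k, v in params.items() if k not in shared_keys}
    return shared, private

def topk_sparsify(update, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of the update; zero out the rest."""
    flat = np.abs(update).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]          # k-th largest magnitude
    return update * (np.abs(update) >= threshold)

def dp_noise(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the update to a fixed L2 norm and add Gaussian noise (Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

# Example: one client round on toy parameters (names are hypothetical).
params = {"backbone.w": np.random.randn(4, 4), "head.w": np.random.randn(4, 2)}
shared, private = partition_params(params, shared_keys={"backbone.w"})
upload = {k: dp_noise(topk_sparsify(v, keep_ratio=0.25)) for k, v in shared.items()}
# `upload` is sent to the server; `private` (e.g. a personalized head) never leaves the client.
```

The point the sketch tries to capture is that only the shared part passes through sparsification and the privacy mechanism, so the noise injected for differential privacy never touches the private part retained on the client.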