FedMLU: Mitigating Source Inference Attacks in Federated Learning Without Losing Utility for Secure IoT Services
Abstract: Federated Learning (FL) addresses growing concerns about Internet of Things (IoT) service security and privacy in edge computing environments by enabling collaborative model training without centralizing sensitive data. However, most existing FL frameworks remain vulnerable to sophisticated threats such as source inference attacks (SIAs), which exploit model updates to infer sensitive information about participating clients, thereby compromising the integrity and security of edge services. To mitigate such attacks and ensure service security and user privacy, various defensive methods, such as RM Learning and RelaxLoss, have been proposed. However, these methods fail to provide effective privacy protection in practical FL scenarios characterized by non-IID data distributions. To address this issue, we propose FedMLU, a novel algorithm designed to counter SIAs effectively. Specifically, FedMLU combines a model alternating update strategy with the RelaxLoss algorithm to minimize the loss discrepancy among samples, thereby reducing the distinguishability exploited by SIAs. Furthermore, distinct soft labels are assigned to each federated participant's model during training, lowering the model's prediction confidence and enhancing privacy protection. Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves stronger defense performance and a more favorable tradeoff between privacy protection and model utility than the state-of-the-art RelaxLoss and two popular FL frameworks, particularly in scenarios with data heterogeneity.
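The two mechanisms the abstract names, RelaxLoss-style loss relaxation and per-participant soft labels, can be sketched as below. This is a minimal illustrative sketch, not the paper's actual implementation: the function names (`relaxloss_step`, `client_soft_labels`), the simple gradient-ascent relaxation rule, and the per-client smoothing factor are all assumptions made for exposition.

```python
import torch
import torch.nn.functional as F


def relaxloss_step(logits: torch.Tensor, targets: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    """RelaxLoss-style relaxation (sketch): rather than driving the
    training loss to zero, keep it near a target level alpha, which
    shrinks the member/non-member loss gap that inference attacks exploit."""
    ce = F.cross_entropy(logits, targets)
    if ce.item() >= alpha:
        return ce        # normal descent phase: loss is still above the target
    return -ce           # ascent phase: push the loss back up toward alpha


def client_soft_labels(targets: torch.Tensor, num_classes: int,
                       smoothing: float) -> torch.Tensor:
    """Hypothetical per-participant soft labels: each client trains with
    its own smoothing factor, lowering the model's prediction confidence."""
    onehot = F.one_hot(targets, num_classes).float()
    return onehot * (1.0 - smoothing) + smoothing / num_classes
```

In a training loop, each client would pick its own `smoothing` value and back-propagate through the (possibly negated) loss returned by `relaxloss_step`; the sign flip is one simple way to realize the "keep losses close together" idea attributed to RelaxLoss in the abstract.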
External IDs: dblp:conf/icws/CuiCGXYLX25