Abstract: Federated learning (FL) is increasingly adopted in Internet of Things (IoT) ecosystems, where distributed devices collaboratively train machine learning models while preserving data privacy. Well-trained models have high commercial value, and their theft severely harms the interests of the model owner. In FL, a free-rider client can avoid contributing data or computing resources by submitting a deceptive local model, illegally obtaining the valuable global model for free and undermining the central server’s interests. Existing model watermarking methods concentrate primarily on detecting deep learning model misuse and do not adequately address free-rider identification. To address this issue, this article presents a box-free watermarking scheme that enables the clients who participate in training to embed private watermarks into the jointly trained federated deep learning model, whereas a free-rider that does not participate cannot. To avoid conflicts between different clients, each client selects a unique trigger class and embeds its watermark into the global model during the training process. Furthermore, we propose a memory-enhancing local updating strategy to effectively fuse the different watermarks into the global model. The proposed method can assist the central server in identifying free-rider clients while also safeguarding the intellectual property of the FL model. The effectiveness of the embedded watermarks is validated by experiments on different models, and their resilience across various training settings and robustness against different watermark removal methods are also evaluated.
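To make the trigger-class idea more concrete, below is a minimal, hedged sketch of how a participating client could embed such a watermark during its local FL update: the client optimizes the usual task loss on its private data plus a watermark loss that maps a private trigger set to its unique trigger class. All names (make_trigger_batch, local_update), the trigger construction, and the hyperparameters are illustrative assumptions, not the scheme described in the article.

```python
# Illustrative sketch only: trigger-class watermark embedding in a local FL update.
# The trigger construction and loss weighting are assumptions, not the paper's method.
import torch
import torch.nn.functional as F


def make_trigger_batch(batch_size, trigger_class, shape=(1, 28, 28), seed=0):
    """Build a fixed pseudo-random trigger set labeled with this client's
    unique trigger class (an assumed construction for illustration)."""
    g = torch.Generator().manual_seed(seed)          # fixed seed -> reproducible triggers
    x = torch.rand(batch_size, *shape, generator=g)  # trigger inputs
    y = torch.full((batch_size,), trigger_class, dtype=torch.long)
    return x, y


def local_update(model, data_loader, trigger_class, wm_weight=0.5, lr=0.01, device="cpu"):
    """One local epoch: task loss on private data plus a watermark loss on the
    client's trigger set, so the watermark is fused into the shared model."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        xt, yt = make_trigger_batch(len(x), trigger_class, shape=x.shape[1:])
        xt, yt = xt.to(device), yt.to(device)
        loss = F.cross_entropy(model(x), y) + wm_weight * F.cross_entropy(model(xt), yt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()  # sent back to the server for aggregation
```

In such a setup, ownership verification would amount to querying the (box-free) model on a client's private trigger set and checking whether the predictions match that client's trigger class; a free-rider who never trained with a trigger set has no watermark to verify.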