G-VCFL: Grouped Verifiable Chained Privacy-Preserving Federated Learning

Published: 01 Jan 2022, Last Modified: 10 Apr 2025 · IEEE Trans. Netw. Serv. Manag. 2022 · CC BY-SA 4.0
Abstract: Federated learning, as a typical distributed learning paradigm, shows great potential in the Industrial Internet of Things, Smart Home, Smart City, etc. It enables collaborative learning without data leaving local users. Despite these benefits, it still faces the risk of privacy breaches and a single point of failure at the aggregation server. Adversaries can use intermediate models to infer user privacy, or even return an incorrect global model by manipulating the aggregation server. To address these issues, several federated learning solutions focusing on privacy preservation and security have been proposed. However, these solutions still face challenges in resource-limited scenarios. In this paper, we propose G-VCFL, a grouped verifiable chained privacy-preserving federated learning scheme. Specifically, we first use a grouped chain learning mechanism to guarantee the privacy of users, and then propose a verifiable secure aggregation protocol to guarantee the verifiability of the global model. G-VCFL does not require any complex cryptographic primitives and does not introduce noise, but enables verifiable privacy-preserving federated learning by utilizing lightweight pseudorandom generators. We conduct extensive experiments on real-world datasets, comparing G-VCFL with other state-of-the-art approaches. The experimental results and functional evaluation indicate that G-VCFL is efficient in all six experimental cases and satisfies all the intended design goals.
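The abstract does not spell out the protocol, but the general idea of chained, PRG-masked aggregation it alludes to can be illustrated with a small sketch. The code below is an illustrative approximation, not the actual G-VCFL protocol: it assumes a single group whose first user blinds its update with a pseudorandom mask derived from a seed known only to the group leader, each subsequent user adds its own update along the chain, and the leader removes the mask before forwarding the group sum. The function names (`prg_mask`, `chained_group_sum`) and the use of NumPy are assumptions made for illustration only.

```python
import numpy as np

def prg_mask(seed: int, shape) -> np.ndarray:
    """Derive a reproducible pseudorandom mask from a seed (stand-in for a lightweight PRG)."""
    return np.random.default_rng(seed).standard_normal(shape)

def chained_group_sum(updates, mask_seed: int) -> np.ndarray:
    """Accumulate per-user model updates along a chain.

    The first user blinds its update with a PRG mask, so no intermediate user
    (or eavesdropper) sees another user's raw update; the group leader, who
    knows the seed, strips the mask before releasing the group sum.
    """
    shape = updates[0].shape
    acc = prg_mask(mask_seed, shape) + updates[0]   # first user: mask + own update
    for u in updates[1:]:                           # later users only add their own update
        acc = acc + u
    return acc - prg_mask(mask_seed, shape)         # group leader removes the mask

# Toy usage: three users in one group, 4-dimensional model updates.
rng = np.random.default_rng(0)
updates = [rng.standard_normal(4) for _ in range(3)]
group_sum = chained_group_sum(updates, mask_seed=42)
assert np.allclose(group_sum, sum(updates))         # the sum is exact; no noise is added
```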