Abstract: Federated learning (FL) is a distributed machine learning framework that aims to protect data privacy by transferring model parameters, rather than private data, from local clients. Compared with the typical cloud-client architecture, applying FL on a cloud-edge-client hierarchical architecture can train the model faster and achieve a better communication-computation trade-off. However, hierarchical federated learning (HFL) still suffers from privacy leakage through analysis of the parameters uploaded by clients or edge servers. To address this problem, we propose a privacy-preserving scheme based on local differential privacy (LDP), in which noise is added to the shared model parameters before they are uploaded to the edge and cloud servers. According to our analysis via the moments accountant, the proposed algorithm provides a strict differential privacy guarantee for both the client and edge-server layers, with adjustable privacy protection levels. We evaluate its performance on image classification tasks, and the results demonstrate that our theoretical analysis is consistent with the simulations.
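To make the client-side perturbation step concrete, below is a minimal sketch of LDP-style parameter perturbation before upload. The abstract does not specify the noise mechanism; since the moments accountant is typically paired with the Gaussian mechanism, this sketch assumes Gaussian noise with L2 clipping, and the names `perturb_parameters`, `clip_norm`, and `sigma` are illustrative, not from the paper.

```python
import numpy as np

def perturb_parameters(params, clip_norm=1.0, sigma=1.1, rng=None):
    """Clip a client's parameter update and add Gaussian noise before upload.

    Clipping bounds the L2 sensitivity of the update at clip_norm, so adding
    Gaussian noise with standard deviation sigma * clip_norm yields a
    per-round (epsilon, delta)-DP guarantee; the cumulative privacy loss over
    many rounds can then be tracked with a moments accountant.
    (Illustrative sketch; not the paper's exact mechanism.)
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = np.concatenate([p.ravel() for p in params])
    norm = np.linalg.norm(flat)
    scale = min(1.0, clip_norm / (norm + 1e-12))  # L2 clipping factor
    noisy = []
    for p in params:
        clipped = p * scale
        noise = rng.normal(0.0, sigma * clip_norm, size=p.shape)
        noisy.append(clipped + noise)
    return noisy

# Example: a client perturbs its local update before sending it to the
# edge server (parameter shapes are illustrative).
update = [np.random.randn(10, 5), np.random.randn(5)]
private_update = perturb_parameters(update, clip_norm=1.0, sigma=1.1)
```

Larger sigma strengthens the privacy guarantee (smaller epsilon per round) at the cost of model accuracy, which is the adjustable protection level the abstract refers to.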