RHFL: a robust method to defend against poisoning attacks for heterogeneous hierarchical federated learning

Published: 2025 | Last Modified: 22 Jan 2026 | J. Supercomput. 2025 | License: CC BY-SA 4.0
Abstract: Federated learning (FL) is a decentralized machine learning paradigm designed to address data privacy concerns by exchanging model parameters instead of raw data. Hierarchical federated learning (HFL), a form of FL, enhances communication efficiency through its client-edge-cloud hierarchy. However, HFL is vulnerable to poisoning attacks, in which malicious clients corrupt the global model by manipulating their local model updates. Moreover, in HFL, both data heterogeneity among clients and the actions of malicious clients cause model updates to deviate from those of benign clients, making it challenging to identify the malicious ones. In this work, we propose a Byzantine-robust method for HFL, named RHFL, which integrates Jensen–Shannon divergence and performance-weighted aggregation to mitigate the impact of poisoning attacks. Specifically, the model update difference of each client is evaluated by computing a Jensen–Shannon divergence score between that client and the edge server. Malicious clients are then detected by an adaptive threshold mechanism that analyzes the statistical characteristics of the Jensen–Shannon divergence scores across clients. Finally, a model performance-weighted aggregation rule is applied at the edge server to enhance the robustness of HFL. Due to its high communication and computational demands, particularly when handling large-scale data and real-time updates, RHFL requires high-performance computing environments for efficient operation. Extensive experiments on four benchmark datasets demonstrate that RHFL outperforms existing defenses in prediction accuracy.
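The abstract outlines a three-step pipeline at each edge server: score each client's deviation with Jensen–Shannon divergence, filter clients via an adaptive threshold over those scores, then aggregate the remaining updates weighted by model performance. The Python/NumPy sketch below illustrates one plausible realization of that pipeline; it is not the paper's implementation. The use of output distributions as the compared quantities, the mean-plus-k·std threshold form, validation accuracy as the performance signal, and all function names are assumptions for illustration.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions p and q."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))  # KL(a || b)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def flag_malicious(scores, k=1.0):
    """Assumed adaptive threshold: flag clients whose JS score exceeds
    mean + k * std of the scores seen at this edge in the current round."""
    s = np.asarray(scores, dtype=float)
    return s > s.mean() + k * s.std()

def performance_weighted_aggregate(updates, perf):
    """Average retained clients' updates, each weighted by a performance
    signal (here, a hypothetical edge-side validation accuracy)."""
    w = np.asarray(perf, dtype=float)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))

# Hypothetical round at one edge server: five clients, the last one poisoned.
rng = np.random.default_rng(0)
edge_dist = rng.dirichlet(np.ones(10))                   # edge model's reference distribution
client_dists = [edge_dist * rng.uniform(0.9, 1.1, 10) for _ in range(4)]
client_dists.append(rng.dirichlet(np.ones(10)))          # strongly deviating client
scores = [js_divergence(d, edge_dist) for d in client_dists]
malicious = flag_malicious(scores)

updates = [rng.normal(size=8) for _ in client_dists]     # stand-in model updates
accs = rng.uniform(0.7, 0.9, len(updates))               # stand-in performance scores
benign = [i for i, bad in enumerate(malicious) if not bad]
edge_update = performance_weighted_aggregate(
    [updates[i] for i in benign], [accs[i] for i in benign])
```

One reason JS divergence suits this role: unlike KL divergence it is symmetric and bounded (by ln 2 under the natural logarithm), so per-round scores are comparable across clients and a simple mean-plus-deviation threshold behaves predictably even under heterogeneous data.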