FedPGT: Prototype-based Federated Global Adversarial Training against Adversarial Attack

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · CSCWD 2024 · CC BY-SA 4.0
Abstract: Federated learning, a distributed machine learning paradigm, is designed to address data silos and user data privacy breaches. However, it remains vulnerable to adversarial attacks. Recent research has attempted to mitigate this issue through techniques such as local adversarial training and model distillation, but these approaches are sensitive to real-world variations in data distribution, ultimately compromising adversarial robustness. In this paper, we propose FedPGT, an approach that employs clustering techniques to assess model convergence and leverages a prototype-based method to guide high-quality adversarial training. FedPGT alleviates the data heterogeneity problem in federated learning and enhances the model's adversarial robustness. Experimental results on three datasets (MNIST, FMNIST, and EMNIST-Digits) demonstrate the efficacy of FedPGT.