Resilient Federated Adversarial Learning With Auxiliary-Classifier GANs and Probabilistic Synthesis for Heterogeneous Environments
Abstract: Collaborative learning paradigms such as Federated Learning (FL) have recently gained significant attention as a means of deploying artificial intelligence (AI)-based Internet of Things (IoT) applications, because participants keep their heterogeneous data on their local devices and share only model updates with a central server. However, FL introduces new challenges, such as vulnerability to unknown data and adversarial samples, as well as inference-time security risks that may expose the system to evasion attacks. In this article, we introduce Auxiliary Federated Adversarial Learning (AuxiFed) to address these challenges. AuxiFed synthesizes data using pre-trained auxiliary-classifier generative adversarial networks (AC-GANs) and probabilistic logic, enhancing model resilience and promoting accurate predictions while safeguarding against adversarial attacks. Building on locally trained models, AuxiFed draws representative and diverse synthetic samples from the pre-trained AC-GAN generators of individual clients for model updates during FL. By merging these synthetic samples with real data during training, we increase data diversity and improve the model's ability to generalize to unknown data. We train the model on two datasets, MNIST and EMNIST, in two distinct environments with homogeneous and heterogeneous data, and evaluate it under several adversarial evasion attacks as well as attack-free scenarios. We also strengthen AuxiFed with robust adversarial training techniques and compare it with baseline algorithms. AuxiFed generally outperforms Federated Averaging (FedAvg), FL with Variational Autoencoders (FedAvg+VAE), and FL with Conditional Generative Adversarial Networks (FedAvg+C-GAN) in terms of accuracy, generalization, and robustness.
Compared with the baseline methods, including FedAvg, FedAvg+VAE, and FedAvg+C-GAN, it shows better convergence during training and better performance on unknown data. Adversarially trained variants of AuxiFed, such as AuxiFed-PGD and AuxiFed-FGSM, also outperform these baselines and their robust variants. Overall, AuxiFed improves model performance, provides resilience against adversarial attacks, and generalizes to unknown data.
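The core data-synthesis step described above (drawing class-conditional samples from a client's pre-trained AC-GAN generator and merging them with real data before a local update) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the linear stand-in for the generator, and the `synth_ratio` parameter are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def acgan_generate(n_samples, n_classes, latent_dim=8, feat_dim=16):
    """Stand-in for a client's pre-trained AC-GAN generator.

    A real AC-GAN generator maps (noise z, class label y) to a synthetic
    sample conditioned on y; here a fixed random linear map plays that role.
    """
    z = rng.normal(size=(n_samples, latent_dim))
    y = rng.integers(0, n_classes, size=n_samples)
    w = rng.normal(size=(latent_dim, feat_dim))
    x_fake = z @ w + y[:, None]  # label-conditioned toy samples
    return x_fake, y

def augment_batch(x_real, y_real, n_classes, synth_ratio=0.5):
    """Merge real client data with AC-GAN synthetic samples before
    a local training step (the mixing step AuxiFed's abstract describes)."""
    n_synth = int(len(x_real) * synth_ratio)
    x_fake, y_fake = acgan_generate(n_synth, n_classes)
    x = np.concatenate([x_real, x_fake])
    y = np.concatenate([y_real, y_fake])
    perm = rng.permutation(len(x))  # shuffle real and synthetic together
    return x[perm], y[perm]

# Toy real batch: 32 samples, 16 features, 10 classes (e.g. MNIST digits).
x_real = rng.normal(size=(32, 16))
y_real = rng.integers(0, 10, size=32)
x_aug, y_aug = augment_batch(x_real, y_real, n_classes=10)
print(x_aug.shape)  # 32 real + 16 synthetic samples
```

In an actual FL round, each client would run this augmentation locally with its own trained generator, train on the mixed batch, and send only the resulting model update to the server for FedAvg-style aggregation.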
External IDs: dblp:journals/tnsm/HaghbinBTP25