DPG-FairFL: A Dual-Phase GAN-Based Defense Framework Against Image-Based Fairness Data Poisoning Attacks in Federated Learning

Published: 2024, Last Modified: 29 Jan 2026 · ICA3PP (4) 2024 · CC BY-SA 4.0
Abstract: Algorithmic fairness, which requires that a machine learning model not discriminate against any demographic group, has garnered increasing attention in the context of federated learning (FL). However, existing work focuses primarily on improving the algorithmic fairness of FL models in a cooperative, secure environment, overlooking the threats that adversarial attacks pose to fairness. Our work pioneers the exploration of these threats and proposes corresponding defense strategies. Specifically, we first introduce three image-based fairness data poisoning attacks that significantly compromise the fairness of FL models. We then propose DPG-FairFL, a novel dual-phase GAN-based defense framework designed to counter these fairness attacks in FL. Experimental results on the CelebA dataset demonstrate the effectiveness of DPG-FairFL in defending against all three fairness data poisoning attacks.