Abstract: Federated Adversarial Training (FAT) addresses data privacy and governance concerns while maintaining model robustness against adversarial attacks. However, the inner maximization of Adversarial Training can exacerbate the data heterogeneity among local clients, aggravating a known pain point of Federated Learning. As a result, a straightforward combination of the two paradigms suffers the performance deterioration observed in previous works. In this paper, we introduce $\alpha$-Weighted Federated Adversarial Training ($\alpha$-WFAT), which overcomes this problem by relaxing the inner maximization of Adversarial Training into a lower bound that is friendly to Federated Learning. We present a theoretical analysis of this $\alpha$-weighted mechanism and its effect on the convergence of FAT. Empirically, we conduct extensive experiments to comprehensively characterize $\alpha$-WFAT; the results on three benchmark datasets demonstrate that $\alpha$-WFAT significantly outperforms FAT under different adversarial learning methods and federated optimization methods.
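To make the interplay between the two paradigms concrete, the sketch below shows a minimal FAT round in PyTorch: a standard PGD inner maximization on each client, followed by an $\alpha$-weighted server aggregation. The abstract does not specify the exact form of the $\alpha$-weighted mechanism, so the aggregation rule here (up-weighting clients with smaller adversarial loss by $1+\alpha$ and down-weighting the rest by $1-\alpha$) is one plausible reading, not the authors' implementation; the names `pgd_attack` and `alpha_weighted_aggregate` are hypothetical.

```python
# Hedged sketch of one FAT round with a hypothetical alpha-weighted aggregation.
# Assumption: the alpha-weighted mechanism re-weights client updates at the
# server according to their local adversarial training loss.
import copy
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Standard PGD inner maximization used in adversarial training."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + step * grad.sign()                  # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)            # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                           # keep valid pixel range
    return x_adv.detach()


def alpha_weighted_aggregate(client_states, client_adv_losses, alpha=0.1):
    """Hypothetical alpha-weighted FedAvg: clients whose adversarial loss is
    at or below the median get weight (1 + alpha), the rest (1 - alpha)."""
    median = sorted(client_adv_losses)[len(client_adv_losses) // 2]
    raw = [(1 + alpha) if l <= median else (1 - alpha) for l in client_adv_losses]
    weights = [w / sum(raw) for w in raw]                   # normalize to sum to 1
    agg = copy.deepcopy(client_states[0])
    for key in agg:
        agg[key] = sum(w * s[key] for w, s in zip(weights, client_states))
    return agg
```

In a training loop, each client would run local epochs minimizing `F.cross_entropy(model(pgd_attack(model, x, y)), y)`, report its final adversarial loss, and the server would call `alpha_weighted_aggregate` over the collected `state_dict()`s; with `alpha=0`, this reduces to plain FedAvg, which matches the intuition that $\alpha$ controls how much the relaxation deviates from the original inner maximization.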