Privacy-Preserving and Fairness-Aware Federated Learning for Critical Infrastructure Protection and Resilience

Published: 23 Jan 2024, Last Modified: 23 May 2024, TheWebConf24
Keywords: privacy preservation, decentralized federated learning, fairness, AI security
TL;DR: This study proposes Confined Gradient Descent (CGD) that enhances the privacy of federated learning by eliminating the sharing of global model parameters.
Abstract: The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources. Federated learning – which enables multiple participants to collaboratively train a model without aggregating the training data – thus becomes a viable technology. However, the global model parameters that have to be shared for optimization are still susceptible to training-data leakage. In this work, we propose Confined Gradient Descent (CGD), which enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that a gradient descent optimization can start from a set of discrete points and converge to another set in the neighborhood of the global minimum of the objective function. As such, each participant can independently initiate its own private global model (referred to as the confined model) and collaboratively learn it toward the optimum. The updates to the confined models are computed collaboratively and securely during training. In this manner, CGD retains the ability to learn from distributed data while greatly diminishing information sharing. This strategy also allows the proprietary confined models to adapt to the heterogeneity in federated learning, providing inherent fairness benefits. We theoretically and empirically demonstrate that decentralized CGD (i) provides stronger differential privacy (DP) protection; (ii) is robust against state-of-the-art poisoning-based privacy attacks; (iii) yields a bounded fairness guarantee among participants; and (iv) achieves high test accuracy (comparable with centralized learning) with a bounded convergence rate on four real-world datasets.
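The core idea in the abstract – each participant keeps a privately initialized confined model, and all models take a shared step derived from an aggregate of local gradients, so the initial offsets persist while every model converges to a neighborhood of the global minimum – can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the number of participants, the linear-regression objective, and all variable names are assumptions, and the "secure collaborative" aggregation is simulated here by a plain sum in the clear.

```python
import numpy as np

# Hypothetical sketch of the Confined Gradient Descent (CGD) idea; the real
# protocol would replace the plain gradient sum with secure aggregation.

rng = np.random.default_rng(0)

# Ground-truth weights and three private data shards (one per participant).
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    shards.append((X, y))

# Each participant starts its OWN confined model at a private point;
# no global model parameters are ever pooled or shared.
base = rng.normal(size=2)
models = [base + 0.1 * rng.normal(size=2) for _ in shards]
init_diff = models[1] - models[0]  # kept to show private offsets persist

lr = 0.1
for _ in range(300):
    # Each participant evaluates its LOCAL gradient at its OWN confined model.
    grads = [X.T @ (X @ w - y) / len(y) for (X, y), w in zip(shards, models)]
    # Only the aggregate (securely summed in real CGD) is exchanged.
    g = sum(grads) / len(models)
    # Every confined model takes the same shared step.
    models = [w - lr * g for w in models]
```

Because every confined model receives the identical update, the pairwise differences set at initialization never change; each participant therefore ends with a distinct proprietary model inside a small neighborhood of the global minimum, matching the abstract's description of starting from a set of discrete points and converging to another set.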
Track: Systems and Infrastructure for Web, Mobile, and WoT
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: No
Submission Number: 1348