Abstract: Differential privacy (DP) is a prominent method to protect information about individuals during data analysis. Training neural networks with differentially private stochastic gradient descent (DPSGD) influences the model's learning dynamics and, consequently, its outputs. This can affect the model's performance and fairness. While the majority of studies on the topic report a negative impact on fairness, it has recently been suggested that fairness levels comparable to those of non-private models can be achieved by optimizing hyperparameters for performance. In this work, we shed further light on the distinctions between various performance and fairness metrics and clarify that disparate impacts on different metrics do not necessarily co-occur. Moreover, we analyze the disparate impact of DPSGD over a wide range of hyperparameter settings, providing new insights for training private and fair neural networks. Finally, we extend our analyses to DPSGD-Global-Adapt, a variant of DPSGD designed to mitigate the disparate impact on accuracy, and conclude that this alternative is not a robust solution with respect to hyperparameter choice.
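For readers unfamiliar with the mechanism the abstract refers to, the sketch below illustrates the core DPSGD update (per-example gradient clipping followed by calibrated Gaussian noise) on a toy logistic-regression model. The clipping norm `C`, noise multiplier `sigma`, learning rate, and synthetic data are illustrative placeholders, not the paper's experimental setup.

```python
import numpy as np

def dpsgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DPSGD update for logistic regression.

    Each example's gradient is clipped to L2 norm at most C; the
    clipped gradients are summed, perturbed with Gaussian noise of
    standard deviation sigma * C, and averaged over the batch.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-example gradients of the logistic (cross-entropy) loss.
    preds = 1.0 / (1.0 + np.exp(-X @ w))          # shape (n,)
    per_example_grads = (preds - y)[:, None] * X  # shape (n, d)
    # Clip each per-example gradient to L2 norm at most C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)
    # Sum, add calibrated Gaussian noise, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * C, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dpsgd_step(w, X, y, rng=rng)
```

The per-example clipping is what alters the learning dynamics relative to ordinary SGD: examples with large gradients (often those from underrepresented groups) contribute disproportionately less after clipping, which is one mechanism behind the disparate impact the paper studies.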
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Abhradeep_Guha_Thakurta1
Submission Number: 3777