On the Performance of Differentially Private Optimization with Heavy-Tail Class Imbalance

Published: 09 Jun 2025, Last Modified: 09 Jun 2025 | HiLD at ICML 2025 Poster | CC BY 4.0
Keywords: differential privacy, optimization, class imbalance
Abstract: In this work, we analyze the optimization behaviour of common private learning optimization algorithms under heavy-tailed class-imbalanced distributions. We show that, in a stylized model, optimizing with Gradient Descent with differential privacy (DP-GD) suffers when learning low-frequency classes, whereas optimization algorithms that estimate second-order information do not. In particular, DP-AdamBC, which removes the DP bias from the estimate of loss curvature, is a crucial component in avoiding the ill-conditioning caused by heavy-tailed class imbalance, and empirically fits the data better, with $\approx8$% and $\approx5$% increases in training accuracy when learning the least frequent classes in controlled experiments and on real data, respectively.
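To make the comparison concrete, below is a minimal NumPy sketch of the two update rules the abstract contrasts: a DP-GD step (per-example clipping plus Gaussian noise) and a DP-Adam-style step with a bias-corrected second moment in the spirit of DP-AdamBC. This is not the authors' implementation; the exact correction term, hyperparameters, and function names (`dp_gd_step`, `dp_adambc_step`, `noise_mult`, `clip_norm`) are illustrative assumptions.

```python
import numpy as np

def dp_gd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One DP-GD step: clip per-example gradients, add Gaussian noise, average."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=params.shape)
    return params - lr * noisy_sum / len(clipped)

def dp_adambc_step(params, per_example_grads, state, lr, clip_norm, noise_mult,
                   rng, beta1=0.9, beta2=0.999, eps=1e-8):
    """One DP-Adam-style step with a bias-corrected second moment (DP-AdamBC idea):
    the injected DP noise inflates the second-moment estimate by roughly
    (noise_mult * clip_norm / batch)^2 per coordinate, so that amount is
    subtracted before taking the square root."""
    batch = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_grad = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm, size=params.shape)) / batch
    m = beta1 * state["m"] + (1 - beta1) * noisy_grad
    v = beta2 * state["v"] + (1 - beta2) * noisy_grad ** 2
    t = state["t"] + 1
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    noise_var = (noise_mult * clip_norm / batch) ** 2   # assumed per-coordinate DP noise variance
    v_corrected = np.maximum(v_hat - noise_var, eps)    # remove the DP bias, keep it positive
    new_params = params - lr * m_hat / np.sqrt(v_corrected)
    return new_params, {"m": m, "v": v, "t": t}
```

The sketch is only meant to show why the correction matters: without subtracting `noise_var`, the denominator is dominated by DP noise for low-frequency classes whose true gradients (and curvature) are small, which is the ill-conditioning the abstract refers to.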
Student Paper: Yes
Submission Number: 68