Keywords: Differential Privacy, Geometric Median, Stochastic Convex Optimization
TL;DR: We give a nearly-linear time algorithm for privately computing the geometric median, addressing the main open question of Haghifam et al. (NeurIPS '24).
Abstract: Estimating the geometric median of a dataset is a robust counterpart to mean estimation, and is a fundamental problem in computational geometry. Recently, [HSU24] gave an $(\epsilon, \delta)$-differentially private algorithm obtaining an $\alpha$-multiplicative approximation to the geometric median objective, $\frac 1 n \sum_{i \in [n]} \|\cdot - \mathbf{x}_i\|$, given a dataset $D = \{\mathbf{x}_i\}_{i \in [n]}$. Their algorithm requires $n \gtrsim \sqrt d \cdot \frac 1 {\alpha\epsilon}$ samples, which they prove is information-theoretically optimal. This result is surprising because its error scales with the effective radius of $D$ (i.e., of a ball capturing most points), rather than the worst-case radius. We give an improved algorithm that obtains the same approximation quality, also using $n \gtrsim \sqrt d \cdot \frac 1 {\alpha\epsilon}$ samples, but in time $\widetilde{O}(nd + \frac d {\alpha^2})$. Our runtime is nearly linear, plus the cost of the cheapest non-private first-order method due to [CLMPS16]. To achieve our results, we use subsampling and geometric aggregation tools inspired by FriendlyCore [TCKMS22] to speed up the "warm start" component of the [HSU24] algorithm, combined with a careful custom analysis of DP-SGD's sensitivity for the geometric median objective.
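For context, the non-private objective in the abstract, $\frac 1 n \sum_{i \in [n]} \|\cdot - \mathbf{x}_i\|$, is classically minimized by Weiszfeld-style iterations (a reweighted averaging scheme; this is an illustrative sketch, not the paper's algorithm, and the function name and tolerance parameters below are our own choices):

```python
import numpy as np

def geometric_median(X, iters=100, tol=1e-8):
    """Approximate the geometric median of rows of X via Weiszfeld iterations.

    Each step replaces the current iterate y with a weighted average of the
    points, weighted by inverse distance: y <- sum_i (x_i / ||y - x_i||) /
    sum_i (1 / ||y - x_i||). This is an illustrative non-private baseline.
    """
    y = X.mean(axis=0)  # warm start at the mean
    for _ in range(iters):
        d = np.linalg.norm(X - y, axis=1)
        d = np.maximum(d, tol)  # guard against division by zero at data points
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y
```

Because the objective is a mean of Euclidean norms, a single far outlier shifts the geometric median far less than it shifts the mean, which is the robustness property motivating the problem.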
Supplementary Material: zip
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 10827