Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor

Published: 18 Sept 2025, Last Modified: 29 Oct 2025
Venue: NeurIPS 2025 poster
License: CC BY 4.0
Keywords: differential privacy, hypothesis selection, computational constraints, density estimation, distribution learning, private hypothesis selection
TL;DR: We provide a near-linear time algorithm for differentially private hypothesis selection that has polylogarithmic sample complexity and achieves the optimal approximation factor.
Abstract: Estimating the density of a distribution from its samples is a fundamental problem in statistics. \emph{Hypothesis selection} addresses the setting where, in addition to a sample set, we are given $n$ candidate distributions---referred to as \emph{hypotheses}---and the goal is to determine which one best describes the underlying data distribution. This problem is known to be solvable very efficiently, requiring roughly $O(\log n)$ samples and running in $\tilde{O}(n)$ time. The quality of the output is measured via the total variation distance to the unknown distribution, and the approximation factor $\alpha$ of the algorithm determines how large this distance is compared to the optimal distance achieved by the best candidate hypothesis. It is known that $\alpha = 3$ is the optimal approximation factor for this problem. We study hypothesis selection under the constraint of \emph{differential privacy}. We propose a differentially private algorithm in the central model that runs in nearly linear time with respect to the number of hypotheses, achieves the optimal approximation factor, and incurs only a modest increase in sample complexity, which remains polylogarithmic in $n$. This resolves an open question posed by [Bun, Kamath, Steinke, Wu, NeurIPS 2019]. Prior to our work, the best known algorithms for this problem required quadratic time.
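The following display makes the approximation-factor guarantee concrete; it is a sketch of the standard (non-private) formulation rather than a statement taken verbatim from the paper, and the symbols $P$, $\hat{h}$, and $\varepsilon$ are introduced here only for illustration. Given samples from an unknown distribution $P$ and hypotheses $h_1, \dots, h_n$, an algorithm with the optimal approximation factor $\alpha = 3$ returns a hypothesis $\hat{h}$ satisfying, with high probability,
$$ d_{\mathrm{TV}}(\hat{h}, P) \;\le\; 3 \cdot \min_{1 \le i \le n} d_{\mathrm{TV}}(h_i, P) \;+\; \varepsilon, $$
which in the non-private setting is achievable with $O(\log n / \varepsilon^2)$ samples. The private algorithm described in the abstract targets the same factor of 3 in nearly linear time while keeping the sample complexity polylogarithmic in $n$.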
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 17814