TL;DR: We study how to coordinate private releases for multiple privacy parameter values with no loss in privacy or utility compared to single releases.
Abstract: Koufogiannis et al. (2016) showed a $\textit{gradual release}$ result for Laplace noise-based differentially private mechanisms: given an $\varepsilon$-DP release, a new release with privacy parameter $\varepsilon' > \varepsilon$ can be computed such that the combined privacy loss of both releases is at most $\varepsilon'$ and the distribution of the latter is the same as that of a single release with parameter $\varepsilon'$.
They also showed gradual release techniques for Gaussian noise, later explored by Whitehouse et al. (2022).
In this paper, we consider a more general $\textit{multiple release}$ setting in which analysts hold private releases with different privacy parameters corresponding to different access/trust levels.
These releases are determined one by one, with privacy parameters in arbitrary order.
A multiple release is $\textit{lossless}$ if access to a subset $S$ of the releases carries the same privacy guarantee as the least private release in $S$ alone, and each release has the same distribution as a single release with the same privacy parameter (a formal statement is sketched after the abstract).
Our main result is that lossless multiple release is possible for a large class of additive noise mechanisms.
For the Gaussian mechanism we give a simple method for lossless multiple release with a short, self-contained analysis that does not require knowledge of the mathematics of Brownian motion (a toy sketch of one such correlated-noise coupling follows the abstract).
We also present lossless multiple release for the Laplace and Poisson mechanisms.
Finally, we consider how to perform gradual release of sparse histograms efficiently, and present a mechanism whose running time is independent of the number of dimensions.
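For concreteness, one way to state the lossless requirement formally (our own notation, paraphrasing the prose above rather than quoting the paper): writing $M_{\varepsilon}$ for the single-release mechanism on input $x$, and $R_1, \dots, R_k$ for the releases with privacy parameters $\varepsilon_1, \dots, \varepsilon_k$, we ask that
\[
\forall S \subseteq [k]:\; (R_i)_{i \in S} \text{ is } \Big(\max_{i \in S} \varepsilon_i\Big)\text{-DP}, \qquad \text{and} \qquad \forall i:\; R_i \overset{d}{=} M_{\varepsilon_i}(x),
\]
where $\overset{d}{=}$ denotes equality in distribution; the maximum appears because a larger $\varepsilon$ is a weaker guarantee, so the least private release in $S$ is the one with the largest parameter.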
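To make the Gaussian case concrete, below is a minimal Python sketch of a lossless multiple-release coupling for a single real-valued query. It couples the noise values like a Brownian motion indexed by variance, one standard way to obtain such a coupling; this is an illustration under our own assumptions (the class name, interface, and the Brownian-motion view are ours), not necessarily the paper's construction, whose analysis deliberately avoids Brownian motion.

```python
import math
import random


class GaussianMultiRelease:
    """Correlated Gaussian releases of a single real value x (illustrative sketch).

    Noise values are coupled like a Brownian motion indexed by variance
    ("time" t = sigma^2): Z(t) ~ N(0, t), and each release is Y(t) = x + Z(t).
    Smaller t means less noise and weaker privacy.  Because Z(t2) equals
    Z(t1) plus independent noise whenever t1 < t2, any set of releases
    reveals no more than its least noisy member, while every release
    individually has the exact single-release marginal N(x, t).
    """

    def __init__(self, x: float) -> None:
        self.x = x
        self.noise = {}  # maps variance t -> sampled noise value Z(t)

    def release(self, t: float) -> float:
        """Return x + Z(t) for a fresh variance t > 0; any order of t is allowed."""
        assert t > 0 and t not in self.noise
        times = sorted(self.noise)
        lower = max((s for s in times if s < t), default=None)
        upper = min((s for s in times if s > t), default=None)
        if lower is None and upper is None:
            # First release: Z(t) ~ N(0, t).
            z = random.gauss(0.0, math.sqrt(t))
        elif upper is None:
            # Noisier than everything so far: extend by an independent increment.
            z = self.noise[lower] + random.gauss(0.0, math.sqrt(t - lower))
        elif lower is None:
            # Less noisy than everything so far: sample Z(t) | Z(upper) = b,
            # which is N(b * t / upper, t * (upper - t) / upper).
            b = self.noise[upper]
            z = random.gauss(b * t / upper, math.sqrt(t * (upper - t) / upper))
        else:
            # Between two existing variances: Brownian-bridge conditional.
            a, b = self.noise[lower], self.noise[upper]
            mean = a + (b - a) * (t - lower) / (upper - lower)
            var = (t - lower) * (upper - t) / (upper - lower)
            z = random.gauss(mean, math.sqrt(var))
        self.noise[t] = z
        return self.x + z
```

For example, calling `release(4.0)`, then `release(1.0)`, then `release(2.0)` produces three releases in an arbitrary order of privacy levels; jointly they are no more revealing than the variance-$1$ release alone, and each is distributed exactly as a fresh Gaussian mechanism with its own variance. Gradual release is the special case where the variances arrive in decreasing order.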
Lay Summary: Organizations often want to share insights from sensitive data, like health records or user behavior, without revealing too much about individuals. Differential privacy offers a way to do this by adding noise, creating a balance between accuracy and privacy. But what happens when different people or groups need different levels of access — for example, internal staff, external consultants, or the public?
Today, releasing multiple versions of the same data often increases privacy risk. Our research introduces a method to make multiple data releases with different privacy levels that are “lossless” — meaning they don’t leak more information when combined than the least private one would alone.
This technique works for common types of noise used in privacy protection, including Gaussian and Laplace. It also supports releasing information in any order of privacy level, which is crucial in real-world scenarios such as evolving trust or data markets.
We show how to apply this to complex cases like high-dimensional statistics and sparse histograms, all while keeping computations efficient. The result: organizations can offer flexible data access without sacrificing privacy or accuracy.
Primary Area: Theory
Keywords: algorithms, differential privacy
Submission Number: 11208