Personalized Privacy Amplification via Importance Sampling

Published: 03 Jan 2025, Last Modified: 03 Jan 2025. Accepted by TMLR. License: CC BY 4.0.
Abstract: For scalable machine learning on large data sets, subsampling a representative subset is a common approach for efficient model training. This is often achieved through importance sampling, whereby informative data points are sampled more frequently. In this paper, we examine the privacy properties of importance sampling, focusing on an individualized privacy analysis. We find that, in importance sampling, privacy is well aligned with utility but at odds with sample size. Based on this insight, we propose two approaches for constructing sampling distributions: one that optimizes the privacy-efficiency trade-off, and one based on a utility guarantee in the form of coresets. We evaluate both approaches empirically in terms of privacy, efficiency, and accuracy on the differentially private $k$-means problem. We observe that both approaches yield similar outcomes and consistently outperform uniform sampling across a wide range of data sets. Our code is available on GitHub.
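To make the subsampling primitive the abstract refers to concrete, here is a minimal Python sketch of importance sampling with reweighting. It is illustrative only and not the paper's specific sampling distributions; the distance-based score function and the helper name `importance_subsample` are placeholder assumptions.

```python
# Minimal sketch of importance subsampling (illustrative assumption, not the
# paper's construction): point i is drawn with probability p_i proportional
# to an importance score s_i, and reweighted by 1/(m * p_i) so that weighted
# sums over the subsample remain unbiased estimates of full-data sums.
import numpy as np

def importance_subsample(X, scores, m, rng=None):
    """Sample m rows of X with replacement, proportional to `scores`.

    Returns the sampled rows and importance weights 1/(m * p_i).
    """
    rng = np.random.default_rng(rng)
    p = scores / scores.sum()                 # sampling distribution
    idx = rng.choice(len(X), size=m, p=p)     # indices drawn according to p
    weights = 1.0 / (m * p[idx])              # importance weights
    return X[idx], weights

# Usage example: score points by distance to the mean, a placeholder notion
# of "informativeness" for illustration only.
X = np.random.default_rng(0).normal(size=(1000, 2))
scores = np.linalg.norm(X - X.mean(axis=0), axis=1) + 1e-12
X_sub, w = importance_subsample(X, scores, m=100, rng=1)
```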
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version.
Code: https://github.com/smair/personalized-privacy-amplification-via-importance-sampling
Assigned Action Editor: ~Antti_Koskela1
Submission Number: 3360