Personalized Privacy Amplification via Importance Sampling

TMLR Paper 3360 Authors

19 Sept 2024 (modified: 04 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: For scalable machine learning on large data sets, subsampling a representative subset is a common approach for efficient model training. This is often achieved through importance sampling, whereby informative data points are sampled more frequently. In this paper, we examine the privacy properties of importance sampling, focusing on an individualized privacy analysis. We find that, in importance sampling, privacy is well aligned with utility but at odds with sample size. Based on this insight, we propose two approaches for constructing sampling distributions: one that optimizes the privacy-efficiency trade-off, and one based on a utility guarantee in the form of coresets. We evaluate both approaches empirically in terms of privacy, efficiency, and accuracy on the differentially private $k$-means problem. We observe that both approaches yield similar outcomes and consistently outperform uniform sampling across a wide range of data sets.
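To make the subsampling primitive in the abstract concrete, below is a minimal Python sketch of generic importance subsampling: each point is kept independently with its own inclusion probability (Poisson sampling) and reweighted by the inverse probability so that weighted statistics remain unbiased. The informativeness score, the probabilities `q`, and the target sample size `m` are illustrative assumptions, not the paper's privacy-constrained or coreset-based constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2))  # toy data set

# Illustrative "informativeness" score: distance from the data mean.
# This is an assumption for the sketch, not the paper's construction.
scores = np.linalg.norm(X - X.mean(axis=0), axis=1)

m = 500  # expected sample size
# Per-point inclusion probabilities, scaled to sum to roughly m.
q = np.clip(m * scores / scores.sum(), 0.0, 1.0)

# Poisson sampling: keep point i independently with probability q[i].
keep = rng.random(len(X)) < q
sample, weights = X[keep], 1.0 / q[keep]  # inverse-probability weights

# Weighted sums over the subsample approximate full-data sums.
print(X.sum(axis=0))
print((weights[:, None] * sample).sum(axis=0))
```

The sketch also hints at the tension the paper analyzes: by standard subsampling amplification, a point kept with a smaller probability $q_i$ enjoys stronger privacy amplification but contributes less reliably to utility, so the choice of per-point inclusion probabilities governs an individualized privacy-utility-efficiency trade-off.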
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The changes in this revision address the reviewers' comments. All changes are highlighted in blue.

- Clarified the trade-offs between privacy, utility, and efficiency (**dK8Z**, **HJwH**)
- Clarified the nature of the variance-optimal distribution (**5tAQ**)
- Clarified the privacy of Algorithm 1 (**dK8Z**, **HJwH**)
- Clarified the experimental set-up (**dK8Z**, **5tAQ**)
- Added reference points for the experimental results (**dK8Z**)
- Improved visibility of the plots (**dK8Z**)
- Renamed *privacy-optimal* to *privacy-constrained* (**5tAQ**)
- Corrected minor errors in the proofs (**5tAQ**)

We thank the reviewers for their valuable suggestions!
Assigned Action Editor: ~Antti_Koskela1
Submission Number: 3360