Auditing $f$-differential privacy in one run

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Oral · CC BY 4.0
TL;DR: We perform tests to audit whether a privacy mechanism satisfies $f$-differential privacy. We only invoke the privacy mechanism once.
Abstract: Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms. Existing auditing mechanisms, however, are either computationally inefficient, requiring multiple runs of the machine learning algorithm, or suboptimal in the empirical privacy they certify. In this work, we present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms. Our approach is efficient: similar to the recent work of Steinke, Nasr, and Jagielski (2023), our auditing procedure leverages the randomness of examples in the input dataset and requires only a single run of the target mechanism. It is also more accurate: we provide a novel analysis that achieves tight empirical privacy estimates by using the hypothesized $f$-DP curve of the mechanism, which provides a more accurate measure of privacy than the traditional $(\epsilon, \delta)$ differential privacy parameters. We use our auditing procedure and analysis to obtain empirical privacy estimates, demonstrating that our procedure delivers tighter estimates than prior methods.
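To make the one-run setup concrete, below is a minimal illustrative sketch in the style of Steinke, Nasr, and Jagielski (2023), not the paper's exact procedure: each canary is independently included with probability 1/2, the mechanism is run once, and an attacker guesses membership from per-canary scores. The names `mechanism`, `score_fn`, and `privacy_bound_fn` are assumed placeholders; in particular, converting the number of correct guesses into an empirical privacy estimate via the hypothesized $f$-DP curve is the paper's contribution and is only stubbed out here.

```python
# Illustrative sketch of a one-run privacy audit (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

def one_run_audit(mechanism, canaries, score_fn, k_guesses, privacy_bound_fn):
    """Run the mechanism once on a random subset of canaries and audit it.

    mechanism        -- callable mapping a list of included canaries to an output
    score_fn         -- callable giving a per-canary membership score from the output
    k_guesses        -- number of most-confident guesses the attacker makes
    privacy_bound_fn -- hypothetical callable returning the largest number of
                        correct guesses (out of k_guesses) consistent with the
                        claimed f-DP curve; this is where the paper's analysis
                        would plug in
    """
    n = len(canaries)
    included = rng.integers(0, 2, size=n).astype(bool)      # S_i ~ Bernoulli(1/2)
    output = mechanism([c for c, s in zip(canaries, included) if s])  # single run

    # Attacker scores every canary, then guesses on its k most confident ones.
    scores = np.array([score_fn(output, c) for c in canaries])
    order = np.argsort(-np.abs(scores))[:k_guesses]
    guesses = scores[order] > 0                              # guess "included" if score > 0
    correct = int(np.sum(guesses == included[order]))

    threshold = privacy_bound_fn(k_guesses)                  # bound from hypothesized f-DP curve
    return correct, correct <= threshold                     # audit passes if within the bound
```

A mechanism that truly satisfies the hypothesized $f$-DP curve should rarely let the attacker exceed the bound; a count of correct guesses above the threshold is evidence against the claimed privacy guarantee.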
Lay Summary: We design tests to check if a machine learning algorithm truly protects privacy as claimed. These tests are similar to known attacks that try to tell whether specific data was used during training, but our method only needs to run the algorithm once—making it more efficient. Compared to earlier approaches, our method catches more privacy failures.
Primary Area: Social Aspects->Privacy
Keywords: Empirical privacy, Auditing, Differential Privacy
Submission Number: 13343