Abstract: We propose a unified optimization framework for designing continuous and discrete noise distributions that ensure differential privacy (DP) by minimizing Rényi DP, a variant of DP, under a cost constraint. Rényi DP has the advantage that, by considering different values of the Rényi parameter $\alpha$, we can tailor the optimization to any number of compositions. To solve the optimization problem, we reduce it to a finite-dimensional convex formulation and perform preconditioned gradient descent. Numerical results demonstrate that the resulting optimized distributions consistently outperform Gaussian and Laplace distributions of the same variance, with significant improvements in $(\varepsilon, \delta)$-DP guarantees in the moderate-composition regime.
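A minimal sketch of the abstract's pipeline is below. This is not the authors' implementation (see the linked repository for that): the grid half-width `K`, Rényi order `ALPHA`, variance budget `VAR_MAX`, and penalty weight `PEN` are illustrative assumptions, off-the-shelf L-BFGS stands in for the paper's preconditioned gradient descent, and a quadratic penalty stands in for the exact cost constraint. The sketch optimizes a discrete noise pmf on an integer grid to minimize the order-$\alpha$ Rényi divergence between the noise and its unit shift (the Rényi-DP guarantee of a sensitivity-1 query) subject to a variance budget, and compares the result against a discretized Gaussian of the same variance.

```python
import numpy as np
from scipy.optimize import minimize

K = 40          # noise support: integers -K..K (illustrative assumption)
ALPHA = 8.0     # Renyi order; larger alpha roughly targets more compositions
VAR_MAX = 25.0  # cost constraint: Var(Z) <= VAR_MAX (assumption)
PEN = 1e3       # quadratic penalty weight enforcing the variance budget

grid = np.arange(-K, K + 1).astype(float)

def pmf(theta):
    # Softmax keeps the pmf positive and normalized, so the
    # finite-dimensional problem is unconstrained in the logits theta.
    z = np.exp(theta - theta.max())
    return z / z.sum()

def renyi_div(p, q, alpha):
    # D_alpha(p || q) = log(sum_i p_i^alpha q_i^(1 - alpha)) / (alpha - 1)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def shift(p):
    # Unit shift of the pmf models the adjacent output of a
    # sensitivity-1 query; a tiny floor keeps the divergence finite.
    q = np.roll(p, 1)
    q[0] = 1e-12
    return q

def objective(theta):
    p = pmf(theta)
    # One shift direction suffices here since the optimum is symmetric.
    eps_alpha = renyi_div(p, shift(p), ALPHA)
    var = np.sum(p * grid**2) - np.sum(p * grid) ** 2
    return eps_alpha + PEN * max(0.0, var - VAR_MAX) ** 2

# Warm-start from a discretized Gaussian with the target variance.
theta0 = -grid**2 / (2.0 * VAR_MAX)
res = minimize(objective, theta0, method="L-BFGS-B")
p_opt = pmf(res.x)

p_gauss = pmf(theta0)  # same-variance discretized-Gaussian baseline
print("optimized eps_alpha:", renyi_div(p_opt, shift(p_opt), ALPHA))
print("Gaussian  eps_alpha:", renyi_div(p_gauss, shift(p_gauss), ALPHA))
```

The softmax parametrization turns the simplex constraint into an unconstrained problem over the logits, which is what makes plain gradient-based solvers applicable in this sketch.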
Lay Summary: Protecting sensitive information is a major concern in the age of big data. Differential Privacy (DP) is a popular method for ensuring privacy by adding random noise to data, making it difficult to identify individuals. However, choosing the right type of noise is critical: too much noise ruins data accuracy, while too little fails to protect privacy. In this work, we introduce a new way to find the noise distribution that gives the strongest privacy guarantee for a given accuracy budget. Our method improves the accuracy of results while still meeting strong privacy standards. We show that our optimized noise works better than commonly used noise types, such as Gaussian or Laplace, across different datasets and privacy settings. This approach can help make privacy-preserving machine learning more reliable and effective in real-world applications.
Link To Code: https://github.com/SankarLab/Renyi-DP-Mechanism-Design
Primary Area: Social Aspects->Privacy
Keywords: Differential Privacy, Rényi Differential Privacy, Optimization
Submission Number: 7474