Keywords: Differential Privacy, Ethics, Privacy Budget
TL;DR: Focusing on the value of the privacy budget in differential privacy is a mistake, as predicted by Goodhart’s Law.
Abstract: This position paper argues that the difficulty of setting the privacy budget should not be viewed as an important limitation of differential privacy compared to alternative methods for privacy-preserving machine learning. The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy in real-world deployments and is sometimes used to promote alternative mitigation techniques for data protection. We believe this misleads decision-makers into choosing unsafe methods. We argue that the difficulty of interpreting privacy budgets does not stem from the definition of differential privacy itself, but from the intrinsic difficulty of estimating privacy risks in context, a challenge that any rigorous method for privacy risk assessment faces. Moreover, we claim that, given the current state of research, any sound method for estimating privacy risks should either be expressible within the differential privacy framework or be accompanied by a justification of why it cannot be.
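For reference, the guarantee at issue is the standard (ε, δ)-differential privacy definition (restated here from the literature, not a contribution of the paper): a randomized mechanism M satisfies (ε, δ)-differential privacy if, for all neighbouring datasets D and D′ and every measurable set of outputs S,

\[
\Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\,\Pr[\,M(D') \in S\,] + \delta ,
\]

where ε is the privacy budget discussed above and δ is a small failure probability.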
Lay Summary: Organizations that use data must protect people’s privacy, especially when that data is used to train machine learning models. A popular and mathematically rigorous approach is differential privacy, which limits what can be learned about any individual in a dataset. It does this using a parameter called the privacy budget, which controls how much information can leak.
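As a minimal illustration of this idea (a standard textbook mechanism, not the paper's own method, with function and parameter names chosen purely for exposition), the Laplace mechanism shows how the privacy budget controls leakage: the smaller the budget, the more noise is added to a query answer.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy answer satisfying epsilon-differential privacy.

    A smaller epsilon (privacy budget) yields a larger noise scale,
    so less can be inferred about any single individual in the data.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # noise scale grows as the budget shrinks
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query (sensitivity 1) released under a budget of 0.5.
noisy_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
```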
Some critics argue that the privacy budget is too hard to interpret and therefore makes differential privacy impractical. This paper argues that this criticism is misleading. Understanding how much privacy is at risk is difficult in any real system, regardless of the method used. This is not a flaw of differential privacy, but a reflection of the inherent challenge of estimating privacy leakage.
Differential privacy is useful precisely because it was designed for complex, high-dimensional settings where intuition alone fails. Recent advances in privacy auditing and attacks have also improved guidance on selecting meaningful budgets in practice. The real challenge is to choose realistic assumptions about what information attackers might have and how models are used, rather than rejecting differential privacy or chasing extremely small privacy budgets at all costs.
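One common way such guidance is made concrete (a well-known consequence of the definition, not a result of this paper) is the hypothesis-testing view of differential privacy: any membership inference attack against an (ε, δ)-differentially private mechanism has its true positive rate (TPR) bounded in terms of its false positive rate (FPR),

\[
\mathrm{TPR} \;\le\; e^{\varepsilon}\,\mathrm{FPR} + \delta ,
\]

so empirically audited attack performance can be compared directly against the chosen budget.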
Submission Number: 404