Neural networks are vulnerable to privacy attacks aimed at stealing sensitive data. When trained on real-world datasets, these models can also inherit latent biases, which may further increase privacy risks. In this work, we investigate the impact of spurious correlation bias on privacy vulnerability, identifying several key challenges. We introduce spurious privacy leakage, a phenomenon where spurious groups can be up to 100 times more vulnerable to privacy attacks than non-spurious groups, and demonstrate how this leakage is connected to task complexity. Furthermore, while robust training methods can mitigate the performance disparity across groups, they fail to reduce privacy vulnerability, and even differential privacy is ineffective in protecting the most vulnerable spurious group in practice. Finally, we compare model architectures in terms of both performance and privacy, revisiting prior research with novel insights.
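To make the per-group vulnerability claim concrete, below is a minimal illustrative sketch (not from the paper) of how privacy leakage could be compared across spurious groups using a simple loss-threshold membership inference attack. The function names, the group labels, and the fixed-threshold decision rule are assumptions chosen for illustration, not the authors' attack setup.

```python
# Minimal sketch: loss-threshold membership inference evaluated per spurious group.
# All names (loss_threshold_mia, per_group_vulnerability) and the threshold rule
# are illustrative assumptions, not the paper's method.
import numpy as np


def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Predict 'member' when the per-example loss falls below the threshold."""
    tpr = float(np.mean(member_losses < threshold))   # true positive rate
    fpr = float(np.mean(nonmember_losses < threshold))  # false positive rate
    return tpr, fpr


def per_group_vulnerability(losses_in, losses_out, groups_in, groups_out, threshold):
    """Compare attack success separately for each (spurious) group label."""
    results = {}
    for g in np.unique(np.concatenate([groups_in, groups_out])):
        tpr, fpr = loss_threshold_mia(
            losses_in[groups_in == g],
            losses_out[groups_out == g],
            threshold,
        )
        results[g] = {"tpr": tpr, "fpr": fpr}
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic losses: group 1 plays the role of a spurious group whose
    # training losses are much lower, making membership easier to infer.
    losses_in = np.concatenate([rng.normal(0.8, 0.3, 500), rng.normal(0.1, 0.05, 100)])
    losses_out = np.concatenate([rng.normal(1.0, 0.3, 500), rng.normal(0.9, 0.3, 100)])
    groups_in = np.concatenate([np.zeros(500, int), np.ones(100, int)])
    groups_out = np.concatenate([np.zeros(500, int), np.ones(100, int)])
    print(per_group_vulnerability(losses_in, losses_out, groups_in, groups_out, threshold=0.5))
```

In this toy setup, the group with systematically lower training losses yields a much higher attack true positive rate at the same threshold, which is the kind of per-group disparity the abstract refers to.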