On the Impact of Hyper-Parameters on the Privacy of Deep Neural Networks

TMLR Paper6746 Authors

01 Dec 2025 (modified: 08 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: The deployment of deep neural networks (DNNs) in many real-world applications leads to the processing of huge amounts of potentially sensitive data. This raises important new concerns, in particular with regard to the privacy of individuals whose data these DNNs process. In this work, we focus on DNNs trained to identify biometric attributes from images, e.g., gender classification, which have been shown to leak unrelated private attributes at inference time, e.g., ethnicity, a phenomenon also referred to as unintentional feature leakage. Existing literature has tackled this problem through architecture-specific and complex techniques that are hard to put into place in practice. In contrast, we focus on a very generalizable aspect of DNNs, the hyper-parameters used to train them, and study how they impact the privacy risk. Specifically, we follow a multi-fidelity, multi-objective hyper-parameter optimization (HPO) approach to (i) conduct the first study of the impact of hyper-parameters on the risk of unintended feature leakage (the privacy risk); (ii) demonstrate that, for a specific main task, HPO successfully identifies hyper-parameter configurations that considerably reduce the privacy risk at a very low cost in utility, achieving results similar to state-of-the-art techniques by changing only the hyper-parameters; and (iii) show that certain hyper-parameter configurations have a significant impact on the privacy risk regardless of the choice of main and private tasks, i.e., that there exist hyper-parameters that generally better preserve privacy.
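To make the abstract's framing concrete, the sketch below shows one plausible shape of a multi-objective HPO loop over training hyper-parameters, using Optuna. This is not the authors' code: the search space, the placeholder training/evaluation function, and the two objectives (main-task accuracy vs. the accuracy of an attribute-inference attack as a leakage proxy) are assumptions for illustration only; the reduced epoch budget stands in for the multi-fidelity aspect.

```python
# Minimal sketch (assumed, not the paper's implementation) of multi-objective
# HPO trading off utility against unintended feature leakage.
import random

import optuna


def train_and_evaluate(lr, batch_size, weight_decay, epochs):
    """Placeholder: train a DNN on the main task (e.g., gender classification)
    for a reduced epoch budget (low-fidelity proxy) and return
    (main_task_accuracy, leakage_accuracy), where leakage_accuracy is the
    accuracy of an attribute-inference attack on an unrelated private
    attribute (e.g., ethnicity). Replace with a real training pipeline."""
    random.seed(hash((lr, batch_size, weight_decay, epochs)) % (2**32))
    main_acc = random.uniform(0.70, 0.95)      # stand-in utility score
    leakage_acc = random.uniform(0.50, 0.90)   # stand-in privacy-risk score
    return main_acc, leakage_acc


def objective(trial):
    # Hypothetical search space over common training hyper-parameters.
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128, 256])
    weight_decay = trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True)

    # Low-fidelity evaluation: few epochs keeps each trial cheap.
    main_acc, leakage_acc = train_and_evaluate(
        lr, batch_size, weight_decay, epochs=5
    )

    # Objective 1: maximize utility; objective 2: minimize privacy risk.
    return main_acc, leakage_acc


study = optuna.create_study(directions=["maximize", "minimize"])
study.optimize(objective, n_trials=50)

# Pareto-optimal trade-offs between utility and leakage.
for t in study.best_trials:
    print(t.params, t.values)
```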
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Antti_Honkela1
Submission Number: 6746