On the Impact of Hyper-Parameters on the Privacy of Deep Neural Networks

ICLR 2026 Conference Submission 16975 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: deep learning, meta learning, privacy, hyper-parameter optimization, unintended feature leakage
Abstract: The deployment of deep neural networks (DNNs) in many real-world applications leads to the processing of huge amounts of potentially sensitive data. This raises important new concerns, in particular with regard to the privacy of individuals whose data is used by these DNNs. In this work, we focus on DNNs trained to identify biometric markers from images, e.g., gender classification, which have been shown to leak unrelated private attributes at inference time, e.g., ethnicity, a phenomenon also referred to as unintended feature leakage. Specifically, we observe that the hyper-parameters of such DNNs significantly impact the leakage of these attributes unrelated to the main task. To address this, we develop a hyper-parameter optimization (HPO) strategy with the goal of training DNNs that mitigate unintended feature leakage while retaining good main-task accuracy. Specifically, we follow a multi-fidelity and multi-objective HPO approach to (i) conduct the first study of the impact of hyper-parameters on the risk of unintended feature leakage (privacy risk); (ii) demonstrate that, for a specific main task, HPO successfully identifies hyper-parameter configurations that considerably reduce the privacy risk with very low impact on utility; and (iii) evidence that there exist hyper-parameter configurations that have a significant impact on the privacy risk regardless of the choice of main and private tasks, i.e., hyper-parameters that generally better preserve privacy.
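The multi-objective selection step described in the abstract, i.e., trading off main-task accuracy against private-attribute leakage across hyper-parameter configurations, can be sketched as a random search followed by Pareto filtering. Everything below is illustrative: the search space, the closed-form `evaluate` response surface, and the `budget` fidelity parameter are hypothetical stand-ins, not the paper's actual HPO method. In practice each `evaluate` call would train a DNN and run an attribute-inference attack against it.

```python
import random

# Hypothetical search space; the hyper-parameter names are illustrative only.
SPACE = {
    "lr": [1e-3, 1e-2, 1e-1],
    "dropout": [0.0, 0.3, 0.5],
    "weight_decay": [0.0, 1e-4, 1e-2],
}

def evaluate(config, budget=1.0):
    """Stand-in for training at a given fidelity (`budget` = fraction of full
    training) and measuring main-task accuracy plus leakage, i.e. an
    attacker's accuracy at inferring the private attribute. The closed-form
    surface below is purely illustrative, not a real model."""
    acc = budget * (0.92 - 0.08 * config["dropout"] - 2.0 * config["weight_decay"])
    leak = 0.85 - 0.45 * config["dropout"] - 3.0 * config["weight_decay"]
    return {"config": config,
            "accuracy": round(acc, 3),
            "leakage": round(max(leak, 0.5), 3)}  # leakage floors at chance

def pareto_front(results):
    """Keep configurations not dominated in (accuracy: higher is better,
    leakage: lower is better)."""
    front = []
    for r in results:
        dominated = any(
            o["accuracy"] >= r["accuracy"] and o["leakage"] <= r["leakage"]
            and (o["accuracy"] > r["accuracy"] or o["leakage"] < r["leakage"])
            for o in results
        )
        if not dominated:
            front.append(r)
    return front

if __name__ == "__main__":
    random.seed(0)
    # Low-fidelity screening pass (budget=0.5), as in multi-fidelity HPO.
    trials = [evaluate({k: random.choice(v) for k, v in SPACE.items()}, budget=0.5)
              for _ in range(20)]
    for r in pareto_front(trials):
        print(r["config"], "acc:", r["accuracy"], "leak:", r["leakage"])
```

A full multi-fidelity scheme would promote only Pareto-optimal (or near-optimal) configurations from the cheap low-budget pass to higher training budgets, rather than evaluating every configuration at full fidelity.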
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 16975