Practical Adversarial Training with Differential Privacy for Deep Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: adversarial robustness, differential privacy, adversarial training, calibration, deep learning
Abstract: Deep learning models are often vulnerable to both privacy risks and adversarial attacks, rendering them untrustworthy on crowd-sourced tasks. These risks are rarely addressed jointly, even though separate solutions exist in the security and privacy communities. In this work, we propose practical adversarial training with differential privacy (DP-Adv), which combines the backbones from both communities to deliver robust and private models with high accuracy. Our algorithm is significantly simpler in design than previous approaches and can incorporate technical advances from both communities: specifically, DP-Adv works with all existing DP optimizers and attack methods off-the-shelf. In particular, DP-Adv is as private as non-robust DP training and as efficient as non-DP adversarial training. Our experiments on multiple image datasets show that DP-Adv outperforms state-of-the-art methods that preserve both robustness and privacy. Furthermore, we observe that adversarial training and DP can notably worsen calibration, but this miscalibration can be mitigated by pre-training.
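The modular recipe the abstract describes, plugging any off-the-shelf attack into any existing DP optimizer, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering, not the paper's code: PGD stands in for the attack, and a hand-written per-example-clipped, noised gradient step stands in for the DP optimizer; all function names and hyperparameters here are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples with (non-private) PGD; any attack could be swapped in."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

def dp_adv_step(model, optimizer, x, y, clip_norm=1.0, noise_mult=1.0):
    """One hypothetical DP-Adv step: attack the batch, then update with a
    DP-SGD-style step (per-example gradient clipping + Gaussian noise).
    Privacy accounting is omitted for brevity."""
    x_adv = pgd_attack(model, x, y)
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for i in range(x.size(0)):  # naive per-example loop, for clarity only
        loss = F.cross_entropy(model(x_adv[i:i + 1]), y[i:i + 1])
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)  # accumulate the clipped per-example gradient
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * (noise_mult * clip_norm)
        p.grad = (s + noise) / x.size(0)
    optimizer.step()
    optimizer.zero_grad()
```

In practice one would pair an existing attack implementation with an off-the-shelf DP optimizer (e.g., from Opacus), as the abstract's compatibility claim suggests; the explicit per-example loop above only makes the clip-then-noise structure visible.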
One-sentence Summary: We propose an efficient and accurate training method, DP-Adv, to preserve both differential privacy and adversarial robustness in deep learning.
Supplementary Material: zip