Differential Privacy in Adversarial Learning with Provable Robustness

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: differential privacy, adversarial learning, robustness bound, adversarial example
TL;DR: Preserving Differential Privacy in Adversarial Learning with Provable Robustness to Adversarial Examples
Abstract: In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theory in DP to establish a new connection between DP preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private adversarial objective function, based on the post-processing property of DP, to tighten the sensitivity of our model. An end-to-end theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.
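As a rough illustration of the general idea of combining DP preservation with adversarial training (not the paper's specific mechanism), the sketch below shows a generic DP-SGD-style update computed on FGSM adversarial examples, assuming PyTorch. The function names fgsm_example and dp_adversarial_step, and all hyperparameters, are hypothetical placeholders.

```python
# Illustrative only: generic DP-SGD update on FGSM adversarial examples.
# This is NOT the paper's mechanism; names and hyperparameters are made up.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.1):
    """Craft an FGSM adversarial example for a single (batched) input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

def dp_adversarial_step(model, batch_x, batch_y,
                        clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One DP-SGD-style step on adversarial examples:
    per-example gradients are clipped (bounding sensitivity),
    summed, and Gaussian noise is added before the update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):                # per-example processing
        x_adv = fgsm_example(model, x.unsqueeze(0), y.unsqueeze(0))
        loss = F.cross_entropy(model(x_adv), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    n = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p -= lr * (s + noise) / n                 # noisy averaged gradient step

# Usage sketch: a small classifier trained on one random batch.
model = torch.nn.Sequential(torch.nn.Linear(784, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))
xb, yb = torch.randn(8, 784), torch.randint(0, 10, (8,))
dp_adversarial_step(model, xb, yb)
```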
10 Replies
