Adaptive Clipping Bound of Deep Learning with Differential Privacy

Published: 01 Jan 2021 · Last Modified: 11 Apr 2025 · TrustCom 2021 · CC BY-SA 4.0
Abstract: Deep learning has been extensively applied in many fields, such as image segmentation, voice recognition, and automatic language translation. However, malicious attackers attempt to attack models trained for deep learning tasks via various schemes. Recently, differential privacy has been proposed to defend against such attacks at the cost of model accuracy. Many optimization methods have therefore been proposed to reduce the overall privacy cost and to seek a tradeoff between privacy and utility. In this paper, we propose a clustering-based approach to obtain a tighter clipping bound for differentially private deep learning models. In addition, we quantify the clipping bound with an objective function of the standard deviation and prove our scheme analytically. Extensive experiments on real datasets demonstrate that our adaptive clipping bound method outperforms the previous approach, which fixes the clipping bound to a constant.
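The abstract's core idea, choosing the DP-SGD clipping bound adaptively from the distribution of per-sample gradient norms rather than fixing it, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the 1-D k-means clustering of gradient norms, the "centroid plus one standard deviation" rule, and all function names are assumptions introduced here for illustration.

```python
import numpy as np

def adaptive_clip_bound(grad_norms, k=2, iters=10, seed=0):
    """Hypothetical sketch: cluster per-sample gradient norms with 1-D k-means
    and set the clipping bound to the densest cluster's centroid plus one
    standard deviation (the paper's exact objective may differ)."""
    rng = np.random.default_rng(seed)
    norms = np.asarray(grad_norms, dtype=float)
    centers = rng.choice(norms, size=k, replace=False)
    for _ in range(iters):
        # Assign each norm to its nearest centroid, then recompute centroids.
        labels = np.argmin(np.abs(norms[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = norms[labels == j].mean()
    main = np.argmax(np.bincount(labels, minlength=k))
    return centers[main] + norms[labels == main].std()

def dp_sgd_step(per_sample_grads, noise_multiplier=1.0, seed=0):
    """One DP-SGD aggregation step using the adaptively chosen bound C:
    clip each per-sample gradient to norm <= C, sum, add Gaussian noise
    scaled to C, and average."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(per_sample_grads, axis=1)
    C = adaptive_clip_bound(norms)
    scale = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale[:, None]
    noise = rng.normal(0.0, noise_multiplier * C, size=per_sample_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_sample_grads), C
```

The intuition matching the abstract: clustering separates the bulk of "typical" gradient norms from outliers, so the bound tracks the data rather than a hand-tuned constant, which tightens the noise scale (proportional to C) while clipping only a small fraction of samples.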