Keywords: deep learning, differential privacy, dynamic privacy budget, layer-wise gradient processing
Verify Author List: I have double-checked the author list and understand that additions and removals will not be allowed after the submission deadline.
Abstract: In recent years, with the rapid development of neural network technology, deep learning has made significant progress in the field of artificial intelligence. However, training neural network models involves datasets that may contain sensitive user information, and attackers can exploit trained models to extract this information, leading to privacy breaches. To address this risk, some deep learning algorithms incorporate differential privacy to protect the trained model, adding controllable random noise at the cost of some model performance. In this paper, we propose a differentially private deep learning algorithm based on the importance of each layer's gradients, called DP-AdamILG. DP-AdamILG further mitigates the impact of added noise on model performance by combining a dynamic privacy budget allocation strategy with noisy gradients formed according to the importance of each layer's gradients. The algorithm's privacy guarantee is proven theoretically. Experimental results show that DP-AdamILG achieves good model performance and exhibits strong robustness.
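The abstract's core idea, adding layer-wise noise scaled by gradient importance, can be sketched as follows. This is a minimal illustration and not the authors' DP-AdamILG implementation; the function name, the norm-share importance heuristic, and the noise-scaling rule are all assumptions for illustration only.

```python
# Illustrative sketch (NOT the authors' DP-AdamILG): clip each layer's
# gradient to bound sensitivity, then add Gaussian noise whose scale is
# reduced for layers judged more important. The importance heuristic
# (a layer's share of the total clipped-gradient norm) is an assumption.
import numpy as np

def noisy_layer_gradients(grads, clip_norm=1.0, base_sigma=1.0, rng=None):
    """grads: list of per-layer gradient arrays; returns noised copies."""
    rng = np.random.default_rng(rng)
    # Clip each layer's gradient to bound its L2 sensitivity.
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    # Importance score: each layer's share of the total clipped norm.
    norms = np.array([np.linalg.norm(g) for g in clipped])
    importance = norms / (norms.sum() + 1e-12)
    noised = []
    for g, w in zip(clipped, importance):
        # More important layers receive proportionally less noise.
        sigma = base_sigma * (1.0 - w)
        noised.append(g + rng.normal(0.0, sigma * clip_norm, size=g.shape))
    return noised
```

In a DP-Adam-style optimizer, the noised gradients would then feed the usual first- and second-moment updates, with the per-step noise scale tied to the dynamically allocated privacy budget.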
A Signed Permission To Publish Form In Pdf: pdf
Primary Area: Deep Learning (architectures, deep reinforcement learning, generative models, deep learning theory, etc.)
Paper Checklist Guidelines: I certify that all co-authors of this work have read and commit to adhering to the guidelines in Call for Papers.
Student Author: No
Submission Number: 65