Differential Privacy Preservation in Interpretable Feedforward-Designed Convolutional Neural Networks
Abstract: The feedforward-designed convolutional neural network (FF-CNN) is an interpretable network whose parameters are trained without backpropagation (BP) or iterative optimization algorithms such as SGD. Instead, the parameters of each layer are computed in a one-pass manner from the statistics of the previous layer's output. Because the training complexity of the FF design is lower than that of BP-based training, FF-CNN offers better utility than BP-trained models in semi-supervised learning, ensemble learning, and continual subspace learning. However, both the FF-CNN training process and the release of the trained model can leak private information about the training data. In this paper, we analyze and verify that an attacker who obtains the trained parameters of an FF-CNN together with partial output responses can recover private information about the original training data. Protecting the training data is therefore imperative. Due to the particular nature of FF-CNN training, existing deep learning privacy-protection techniques are not applicable, so we propose an algorithm called differential privacy subspace approximation with adjusted bias (DPSaab) to protect the training data in FF-CNN. Based on the differing contributions of the model's filters to the output response, we allocate the privacy budget in proportion to the corresponding eigenvalues, assigning a larger budget to filters with larger contributions and a smaller budget to those with smaller contributions. Extensive experiments on the MNIST, Fashion-MNIST, and CIFAR-10 datasets show that DPSaab achieves better utility than existing privacy-protection techniques.
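To make the eigenvalue-proportional budget allocation concrete, the following is a minimal sketch of how a total privacy budget could be split across filters by their eigenvalue ratios and used to scale per-filter Laplace noise. The function names (`allocate_budget`, `add_laplace_noise`), the use of the Laplace mechanism, and the unit sensitivity are illustrative assumptions, not the paper's actual DPSaab construction, whose sensitivity analysis and noise calibration are not given in the abstract.

```python
import numpy as np

def allocate_budget(eigenvalues, total_epsilon):
    """Split a total privacy budget across filters in proportion to their
    eigenvalues, i.e., their contribution to the output response."""
    eigenvalues = np.asarray(eigenvalues, dtype=float)
    ratios = eigenvalues / eigenvalues.sum()
    return total_epsilon * ratios

def add_laplace_noise(responses, epsilons, sensitivity=1.0):
    """Perturb each filter's output response with Laplace noise of scale
    sensitivity / epsilon_k; a larger per-filter budget means less noise."""
    noisy = np.empty_like(responses, dtype=float)
    for k, eps in enumerate(epsilons):
        scale = sensitivity / eps
        noisy[:, k] = responses[:, k] + np.random.laplace(0.0, scale, size=responses.shape[0])
    return noisy

# Example: four filters whose eigenvalues reflect their contribution.
eigenvalues = [4.0, 2.0, 1.0, 0.5]
epsilons = allocate_budget(eigenvalues, total_epsilon=2.0)
responses = np.random.randn(8, 4)   # placeholder output responses (8 samples, 4 filters)
noisy_responses = add_laplace_noise(responses, epsilons)
print(epsilons)  # larger eigenvalue -> larger budget -> smaller noise scale
```

Under this sketch, filters that capture more of the response energy receive a larger share of the budget and are perturbed less, preserving the dominant subspace directions while still spending the full budget overall.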