DP-RBAdaBound: A differentially private randomized block-coordinate adaptive gradient algorithm for training deep neural networks
Abstract: In order to rapidly train deep learning models, many adaptive gradient methods have been proposed in recent years, such as Adam and AMSGrad. However, the computation of the full gradient vectors in these methods becomes prohibitively expensive at each iteration when handling high-dimensional data. Moreover, private information may be leaked during training. For these reasons, we propose a differentially private randomized block-coordinate adaptive gradient algorithm, called DP-RBAdaBound, for training deep learning models. To reduce the computation of the full gradient vectors, we randomly choose a block-coordinate to update the model parameters at each iteration. Meanwhile, we add Laplace noise to the chosen block-coordinate of the gradient vector at each iteration to preserve users' privacy. Furthermore, we rigorously show that the proposed algorithm preserves $\epsilon$-differential privacy, where $\epsilon > 0$ denotes the privacy level. We also rigorously prove that a square-root regret bound, i.e., $O(\sqrt{T})$, is achieved in the convex setting, where $T$ is the time horizon. Besides, we offer a tradeoff between the regret bound and privacy: with other parameters fixed, the regret bound is of order $O(1/\epsilon^4)$ when $\epsilon$-differential privacy is achieved. Finally, we confirm the computational benefit by training DenseNet-121 and ResNet-34 models on the CIFAR-10 dataset. The effectiveness of DP-RBAdaBound is further validated by training the DenseNet-121 model on the CIFAR-100 dataset and an LSTM model on the Penn TreeBank dataset.
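The abstract outlines the per-iteration mechanism: sample a block of coordinates, perturb the gradient restricted to that block with Laplace noise calibrated to the privacy level $\epsilon$, and apply an AdaBound-style adaptive update to that block only. The Python sketch below is a minimal illustration of this idea, not the authors' implementation; the function name `dp_rbadabound_step`, the block-sampling scheme, the moment bookkeeping, and the noise scale `sensitivity / epsilon_dp` are our own assumptions.

```python
import numpy as np

def dp_rbadabound_step(theta, grad_fn, state, t, *, block_size=128,
                       lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                       final_lr=0.1, gamma=1e-3,
                       sensitivity=1.0, epsilon_dp=1.0, rng=None):
    """One illustrative step (hypothetical sketch, t >= 1): sample a random
    block of coordinates, add Laplace noise to its gradient, and apply an
    AdaBound-style clipped adaptive update to that block only."""
    rng = np.random.default_rng() if rng is None else rng
    d = theta.size

    # Randomly choose the block of coordinates updated in this iteration.
    block = rng.choice(d, size=min(block_size, d), replace=False)

    # Partial gradient on the chosen block (grad_fn is assumed to return
    # the gradient entries for the requested coordinate indices).
    g = grad_fn(theta, block)

    # Laplace perturbation; the scale is calibrated to an assumed L1
    # sensitivity of the block gradient and the privacy level epsilon_dp.
    g = g + rng.laplace(scale=sensitivity / epsilon_dp, size=g.shape)

    # Adam-style exponential moving averages, stored per coordinate;
    # only the sampled block's entries are touched.
    m, v = state["m"], state["v"]
    b1, b2 = betas
    m[block] = b1 * m[block] + (1 - b1) * g
    v[block] = b2 * v[block] + (1 - b2) * g ** 2

    # AdaBound-style dynamic bounds: the per-coordinate step size is clipped
    # into [lower, upper], which both converge to final_lr as t grows.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(lr / (np.sqrt(v[block]) + eps), lower, upper)

    theta[block] -= step * m[block]
    return theta, state

# Example usage with a toy quadratic objective (purely illustrative):
d = 10_000
theta = np.zeros(d)
state = {"m": np.zeros(d), "v": np.zeros(d)}
grad_fn = lambda w, idx: 2.0 * w[idx]  # block gradient of ||w||^2
for t in range(1, 101):
    theta, state = dp_rbadabound_step(theta, grad_fn, state, t)
```

In this sketch only the sampled block's parameters and moment estimates are read and written each iteration, which is the source of the computational saving over full-gradient adaptive methods that the abstract claims.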