What can we learn from gradients?

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission
Keywords: Privacy, Security, Reconstruction Attack, Federated Learning
Abstract: Recent work (Zhu, Liu, and Han 2019) has shown that it is possible to reconstruct the input (image) from the gradients of a neural network. In this paper, our aim is to better understand the limits of reconstruction and to speed up image reconstruction by imposing prior image information and improved initialization. Exploring the theoretical limits of input reconstruction, we show that a fully-connected neural network with a single hidden node is enough to reconstruct a single input image, regardless of the number of nodes in the output layer. We then generalize this result to gradients averaged over a mini-batch of size B: the full mini-batch can be reconstructed in a fully-connected network if the number of hidden units exceeds B. For a convolutional neural network, the required number of filters h in the first convolutional layer is again determined by the batch size B, but the input width d and the width after the filter, $d'$, also play a role: $h=(\frac{d}{d'})^2BC$, where C is the number of input channels. Finally, we validate and underpin our theoretical analysis on biomedical data (fMRI, ECG signals, and cell images) and on benchmark data (MNIST, CIFAR100, and face images).
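
To make the single-hidden-node claim concrete, here is a minimal sketch (not the paper's code; the one-layer network, squared loss, and dimensions are assumptions for illustration) of why first-layer weight and bias gradients reveal the input in closed form: since dL/dW = g xᵀ and dL/db = g with g = dL/dz, any hidden unit i with gᵢ ≠ 0 yields x = (dL/dW)[i, :] / (dL/db)[i].

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 1                      # input width, a single hidden node
x = rng.normal(size=d)           # the "private" input
W = rng.normal(size=(h, d))
b = rng.normal(size=h)

z = W @ x + b                    # forward pass through the first layer
g = 2 * (z - 1.0)                # dL/dz for the toy loss L = ||z - 1||^2

grad_W = np.outer(g, x)          # dL/dW, what a federated client would share
grad_b = g                       # dL/db

x_rec = grad_W[0] / grad_b[0]    # analytic reconstruction from one hidden node
assert np.allclose(x_rec, x)
```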
One-sentence Summary: We analyze the limits of input reconstruction, and speed up and stabilize reconstruction in the federated learning setup.
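
For context, a simplified sketch of the optimization-based attack of Zhu, Liu, and Han (2019) that the summary builds on: a dummy input is optimized so that its gradient matches the observed gradient. The network, loss, label (assumed known here), and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(8, 4), torch.nn.Tanh(), torch.nn.Linear(4, 2))
x_true = torch.randn(1, 8)               # the "private" input
y_true = torch.tensor([1])               # label assumed known in this sketch

loss_fn = torch.nn.CrossEntropyLoss()
true_grads = torch.autograd.grad(loss_fn(net(x_true), y_true),
                                 net.parameters())

x_dummy = torch.randn(1, 8, requires_grad=True)  # a better init speeds this up
opt = torch.optim.LBFGS([x_dummy])

def closure():
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        loss_fn(net(x_dummy), y_true), net.parameters(), create_graph=True)
    # distance between the dummy gradient and the observed gradient
    diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(50):
    opt.step(closure)
print((x_dummy - x_true).norm())  # shrinks as the gradient match improves
```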
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=j96QlPnn1O