Exploring User-level Gradient Inversion with a Diffusion Prior

Published: 28 Oct 2023, Last Modified: 21 Nov 2023, FL@FM-NeurIPS'23 Poster
Student Author Indication: Yes
Keywords: gradient inversion, user-level privacy, diffusion prior, distributed learning
TL;DR: This work investigates user-level gradient inversion as a new attack surface in distributed learning and explores a diffusion prior for efficient reconstruction of representative images.
Abstract: We explore user-level gradient inversion as a new attack surface in distributed learning. We first investigate existing attacks on their ability to make inferences about private information beyond training data reconstruction. Motivated by the low reconstruction quality of existing methods, we propose a novel gradient inversion attack that applies a denoising diffusion model as a strong image prior to enhance recovery in the large-batch setting. Unlike traditional attacks, which aim to reconstruct individual samples and degrade at large batch and image sizes, our approach instead aims to recover a representative image that captures the sensitive shared semantic information corresponding to the underlying user. Our experiments with face images demonstrate the ability of our method to recover realistic facial images along with private user attributes.
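To illustrate the general idea of gradient-matching inversion with an image prior, the sketch below shows a minimal, generic PyTorch loop; it is not the authors' exact method. A single "representative" image latent is optimized through a stand-in prior (`prior_decode`, a placeholder for a pretrained denoising diffusion model) so that the gradients it induces match the gradients shared by a user. The victim model, image shapes, dummy label, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical sketch of gradient inversion with an image prior (not the paper's exact attack).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

device = "cpu"
model = resnet18(num_classes=10).to(device)   # victim model, assumed known to the attacker
criterion = torch.nn.CrossEntropyLoss()

def batch_gradients(images, labels):
    """Gradients of the training loss w.r.t. the model parameters for one batch."""
    loss = criterion(model(images), labels)
    return torch.autograd.grad(loss, model.parameters(), create_graph=True)

# "Observed" gradients shared by the user (simulated here with random data).
true_images = torch.rand(8, 3, 64, 64, device=device)
true_labels = torch.randint(0, 10, (8,), device=device)
observed = [g.detach() for g in batch_gradients(true_images, true_labels)]

def prior_decode(z):
    # Placeholder for a pretrained diffusion prior mapping a latent to an image;
    # here just a sigmoid so that pixel values stay in [0, 1].
    return torch.sigmoid(z)

# Attacker: optimize one latent so its gradients match the observed ones.
z = torch.randn(1, 3, 64, 64, device=device, requires_grad=True)
dummy_label = torch.zeros(1, dtype=torch.long, device=device)
optimizer = torch.optim.Adam([z], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    recon = prior_decode(z)
    synth = batch_gradients(recon, dummy_label)
    # Cosine-distance gradient-matching objective, a common choice in inversion attacks.
    loss = sum(1 - F.cosine_similarity(s.flatten(), o.flatten(), dim=0)
               for s, o in zip(synth, observed))
    loss.backward()
    optimizer.step()
```

In this sketch the recovered image is a single representative of the whole batch rather than a per-sample reconstruction, which mirrors the user-level goal described in the abstract; swapping the placeholder prior for a real pretrained diffusion model is the key ingredient the paper explores.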
Submission Number: 37