Private Gradient Estimation is Useful for Generative Modeling

Published: 20 Jul 2024, Last Modified: 04 Aug 2024 · MM 2024 Oral · CC BY 4.0
Abstract: While generative models have proved successful in many domains, they may pose a privacy leakage risk in practical deployment. To address this issue, differentially private generative modeling has emerged as a way to train private generative models for different downstream tasks. However, existing private generative modeling approaches face significant challenges in generating high-dimensional data because of the inherent complexity of modeling such data. In this work, we present a new private generative modeling approach in which samples are generated via Hamiltonian dynamics using gradients of the private dataset estimated by a well-trained network. In this approach, we achieve differential privacy by perturbing the projection vectors used in gradient estimation with sliced score matching. In addition, we enhance the reconstruction ability of the model by incorporating a residual enhancement module during score matching. For sampling, we perform Hamiltonian dynamics with gradients estimated by the well-trained network, moving the sampled data closer to the private dataset's manifold step by step. In this way, our model is able to generate data at a resolution of 256$\times$256. Extensive experiments and analysis demonstrate the effectiveness and rationality of the proposed approach.
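To make the two main steps concrete, the sketch below illustrates (i) a sliced score matching objective in which the random projection vectors are perturbed with Gaussian noise, the point at which the abstract says differential privacy is introduced, and (ii) leapfrog Hamiltonian dynamics that uses the trained score network as the drift. All names (`dp_sliced_score_matching_loss`, `hamiltonian_sampling`, the noise scale, step sizes, and the toy MLP) are illustrative assumptions rather than the authors' implementation; the residual enhancement module and the formal privacy accounting are omitted.

```python
import torch
import torch.nn as nn

def dp_sliced_score_matching_loss(score_net, x, proj_noise_std=0.5, n_projections=1):
    """Sliced score matching on flattened samples x of shape (B, D).

    The random projection vectors v are perturbed with Gaussian noise
    (proj_noise_std) -- an illustrative stand-in for perturbing projection
    vectors as the privacy mechanism, without any privacy accounting.
    """
    x = x.detach().requires_grad_(True)
    loss = 0.0
    for _ in range(n_projections):
        v = torch.randn_like(x)
        v = v / v.norm(dim=-1, keepdim=True)            # random slicing direction
        v = v + proj_noise_std * torch.randn_like(v)    # perturb the projection vector (assumed noise scale)
        s = score_net(x)                                 # estimated score: grad_x log p(x)
        sv = (s * v).sum()
        grad_sv = torch.autograd.grad(sv, x, create_graph=True)[0]
        loss = loss + (grad_sv * v).sum(dim=-1) + 0.5 * (s * v).sum(dim=-1) ** 2
    return (loss / n_projections).mean()

@torch.no_grad()
def hamiltonian_sampling(score_net, x_init, n_iters=200, step_size=1e-2, n_leapfrog=5):
    """Leapfrog Hamiltonian dynamics driven by the learned score,
    pulling samples toward the data manifold step by step."""
    x = x_init.clone()
    for _ in range(n_iters):
        p = torch.randn_like(x)                          # resample momentum each iteration
        p = p + 0.5 * step_size * score_net(x)           # leading half-step on momentum
        for step in range(n_leapfrog):
            x = x + step_size * p                        # full step on position
            if step < n_leapfrog - 1:
                p = p + step_size * score_net(x)         # full step on momentum
        p = p + 0.5 * step_size * score_net(x)           # trailing half-step on momentum
    return x

if __name__ == "__main__":
    dim = 64
    score_net = nn.Sequential(nn.Linear(dim, 256), nn.SiLU(), nn.Linear(256, dim))
    opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
    data = torch.randn(128, dim)                         # placeholder for (flattened) private data
    for _ in range(100):
        opt.zero_grad()
        loss = dp_sliced_score_matching_loss(score_net, data)
        loss.backward()
        opt.step()
    samples = hamiltonian_sampling(score_net, torch.randn(16, dim))
    print(samples.shape)
```

In this toy setup the score network is trained only on the noised sliced objective, so the sampler never touches the private data directly; how much noise the projection vectors need for a given privacy budget is exactly the question the paper's analysis addresses and is not modeled here.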
Primary Subject Area: [Generation] Generative Multimedia
Secondary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: Generative models and multimedia share a significant and evolving relationship. Generative models are machine learning techniques that learn to create content resembling their training data, a capability with profound implications for the multimedia field, e.g., image generation.
Supplementary Material: zip
Submission Number: 1828