Membership Inference Attacks and Privacy in Topic Modeling

Published: 18 Sept 2024, Last Modified: 18 Sept 2024. Accepted by TMLR. License: CC BY 4.0
Abstract: Recent research shows that large language models are susceptible to privacy attacks that infer aspects of their training data. However, it is unclear whether simpler generative models, such as topic models, share these vulnerabilities. In this work, we propose an attack against topic models that can confidently identify members of the training data of Latent Dirichlet Allocation (LDA) models. Our results suggest that the privacy risks associated with generative modeling are not restricted to large neural models. To mitigate these vulnerabilities, we also explore differentially private (DP) topic modeling. We propose a framework for private topic modeling that incorporates DP vocabulary selection as a pre-processing step, and we show that it improves privacy while having only a limited effect on practical utility.
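To make the two ideas in the abstract concrete, the sketch below illustrates (a) a generic loss-threshold membership inference test against a trained LDA model and (b) a simplified noisy-threshold form of DP vocabulary selection. This is a minimal illustration, not the authors' attack or their DP mechanism (their implementation is in the repository linked below): it assumes gensim's LdaModel, and the helper names (doc_score, dp_vocab_selection), the 5% false-positive calibration, and the sensitivity-1 assumption in the DP step are all illustrative choices.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel


def doc_score(lda: LdaModel, dictionary: Dictionary, tokens: list) -> float:
    """Per-word variational log-likelihood bound of one document under the model.

    Training members tend to score higher than non-members, which is the
    signal a loss-threshold membership inference attack exploits.
    """
    bow = dictionary.doc2bow(tokens)
    n_words = sum(count for _, count in bow)
    if n_words == 0:
        return float("-inf")
    return lda.bound([bow]) / n_words


def calibrate_threshold(lda, dictionary, reference_docs, fpr=0.05) -> float:
    """Pick a decision threshold from documents known to be non-members,
    targeting a given false-positive rate (5% here, an arbitrary choice)."""
    scores = [doc_score(lda, dictionary, d) for d in reference_docs]
    return float(np.quantile(scores, 1.0 - fpr))


def is_member(lda, dictionary, tokens, threshold) -> bool:
    """Guess 'training member' when the document scores above the threshold."""
    return doc_score(lda, dictionary, tokens) > threshold


def dp_vocab_selection(doc_freqs: dict, epsilon: float, cutoff: float) -> set:
    """Noisy-threshold vocabulary selection: keep words whose Laplace-noised
    document frequency clears the cutoff.

    Simplifying assumption: each document's contribution to every count is
    bounded so the sensitivity is 1; a rigorous implementation would enforce
    this (e.g., via a DP set union mechanism) before adding noise.
    """
    rng = np.random.default_rng()
    return {w for w, c in doc_freqs.items()
            if c + rng.laplace(scale=1.0 / epsilon) > cutoff}
```

The calibration step mirrors the usual membership inference evaluation setup: fix a target false-positive rate on held-out non-member documents, then measure how many true training members score above the resulting cutoff.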
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/nicomanzonelli/topic_model_attacks
Assigned Action Editor: ~Antti_Honkela1
Submission Number: 2505