Private Text Generation by Seeding Large Language Model Prompts

Published: 12 Oct 2024, Last Modified: 11 Nov 2024 · GenAI4Health Poster · CC BY 4.0
Keywords: differential privacy, private text, synthetic text, synthetic medical record, synthetic data, private data, large language model
TL;DR: We generate differentially private synthetic medical records by seeding LLM prompts with privately sampled keyphrases from real medical records
Abstract: We explore how private synthetic text can be generated by suitably prompting a large language model (LLM). This addresses a challenge for organizations like hospitals, which hold sensitive text data like patient medical records, and wish to share it in order to train machine learning models for medical tasks, while preserving patient privacy. Methods that rely on training or finetuning a model may be out of reach, either due to API limits of third-party LLMs, or due to ethical and legal prohibitions on sharing the private data with the LLM itself. We propose Differentially Private Keyphrase Prompt Seeding (DP-KPS), a method that generates a private synthetic text corpus from a sensitive input corpus, by accessing an LLM only through privatized prompts. It is based on seeding the prompts with private samples from a distribution over phrase embeddings, thus capturing the input corpus while achieving requisite output diversity and maintaining differential privacy. We evaluate DP-KPS on downstream ML text classification tasks, and show that the corpora it generates preserve much of the predictive power of the original ones. Our findings offer hope that institutions can reap ML insights by privately sharing data with simple prompts and little compute.
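To make the idea concrete, here is a minimal, hedged sketch of the kind of private prompt seeding the abstract describes. The actual DP-KPS method samples from a distribution over phrase *embeddings*; this sketch simplifies to frequency counts over a fixed public candidate vocabulary, sampled with the exponential mechanism. All function names and the prompt template are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def dp_sample_keyphrases(counts, epsilon, k, rng=None):
    """Sample k keyphrases with the exponential mechanism.

    counts maps each candidate phrase to its frequency in the private
    corpus; adding or removing one record changes each count by at most
    1 (sensitivity 1), so each draw is epsilon-DP. By basic composition,
    k draws spend a total budget of k * epsilon.
    """
    rng = rng or random.Random()
    phrases = list(counts)
    max_c = max(counts.values())
    picked = []
    for _ in range(k):
        # P(phrase) proportional to exp(epsilon * count / 2); subtracting
        # the max count before exponentiating avoids overflow without
        # changing the sampling distribution.
        weights = [math.exp(epsilon * (counts[p] - max_c) / 2) for p in phrases]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for p, w in zip(phrases, weights):
            acc += w
            if r <= acc:
                picked.append(p)
                break
    return picked

def seed_prompt(keyphrases):
    # Hypothetical prompt template; the paper's actual prompt wording
    # is not given in the abstract.
    return ("Write a realistic synthetic medical record that mentions: "
            + ", ".join(keyphrases))
```

The sensitive corpus is touched only when computing `counts`; the LLM sees just the privatized prompt, which matches the abstract's constraint of accessing the model through privatized prompts alone.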
Submission Number: 5