Text Embeddings Reveal (Almost) As Much As Text

20 Jul 2023 (modified: 14 Nov 2023) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
Keywords: text retrieval, embeddings, inversion, privacy
TL;DR: We propose Vec2Text, a method that can recover 92% of 32-token embedded inputs exactly
Abstract: How much private information do text embeddings reveal about the original text? We investigate the problem of embedding inversion: reconstructing the full text represented in dense text embeddings. We frame the problem as controlled generation: generating text that, when re-embedded, is close to a fixed point in latent space. We find that although a naive model conditioned on the embedding performs poorly, a multi-step method that iteratively corrects and re-embeds text is able to recover 92% of 32-token text inputs exactly. We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes.
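
The iterative correct-and-re-embed loop described in the abstract can be illustrated with a short sketch. The Python below is a minimal, self-contained rendering of that loop, not the authors' released implementation: embed() and correct() are hypothetical stubs standing in for the frozen black-box embedder and the trained corrector model, and invert() shows the generate, re-embed, compare, and correct cycle.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for the frozen black-box embedder phi(text) -> unit vector.
    Here: a toy character-hash embedding, purely for illustration."""
    vec = np.zeros(64)
    for i, byte in enumerate(text.encode("utf-8")):
        vec[i % 64] += byte
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def correct(target_emb: np.ndarray, hypothesis: str,
            hypothesis_emb: np.ndarray) -> str:
    """Placeholder for the trained corrector. In the paper this is a model
    conditioned on the target embedding, the current hypothesis text, and
    that hypothesis's embedding; here it is a no-op stub."""
    return hypothesis

def invert(target_emb: np.ndarray, steps: int = 10) -> str:
    """Iterative inversion: start from a naive hypothesis, then repeatedly
    re-embed it and ask the corrector for a revision, keeping whichever
    intermediate hypothesis re-embeds closest to the target."""
    hypothesis = ""  # step 0: stand-in for the naive, embedding-only guess
    best, best_sim = hypothesis, -1.0
    for _ in range(steps):
        hyp_emb = embed(hypothesis)
        sim = float(target_emb @ hyp_emb)  # cosine similarity (unit vectors)
        if sim > best_sim:
            best, best_sim = hypothesis, sim
        hypothesis = correct(target_emb, hypothesis, hyp_emb)
    return best

if __name__ == "__main__":
    target = embed("the quick brown fox")
    print(invert(target))
```

With the stubs swapped for a real embedder and a trained corrector, the loop matches the abstract's framing of inversion as controlled generation: the stopping criterion is proximity to a fixed point in latent space rather than any property of the text itself.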