GPT, But Backwards: Exactly Inverting Language Model Outputs

Published: 01 Jul 2025, Last Modified: 04 Jul 2025
ICML 2025 R2-FM Workshop Poster
License: CC BY 4.0
Keywords: Machine Learning, Adversarial Attacks, Discrete Optimisation, One Hot Encoding, Large Language Models, LLM, Auditing, Model Inversion, Prompt Inversion, Prompt Recovery, Prompt Stealing
TL;DR: We take a (possibly problematic) output from an LLM and optimise the input (using our SOTA algorithm) until the LLM generates that same output (reproducing the bug).
Abstract: While existing auditing techniques attempt to identify potential unwanted behaviours in large language models (LLMs), we address the complementary forensic problem of reconstructing the exact input that led to an existing LLM output, enabling post-incident analysis and potentially the detection of fake output reports. We formalise exact input reconstruction as a discrete optimisation problem with a unique global minimum and introduce SODA, an efficient gradient-based algorithm that operates on a continuous relaxation of the input search space with periodic restarts and parameter decay. Through comprehensive experiments on LLMs ranging in size from 33M to 3B parameters, we demonstrate that SODA significantly outperforms existing approaches. We succeed in fully recovering 79.5% of shorter out-of-distribution inputs from next-token logits, without a single false positive, but struggle to extract private information from the outputs of longer (15+ token) input sequences. This suggests that standard deployment practices may currently provide adequate protection against malicious use of our method. Our code is available at https://doi.org/10.5281/zenodo.15539879.
Submission Number: 131
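The abstract describes SODA as gradient-based optimisation over a continuous relaxation of the discrete input space, driven by observed next-token logits. The sketch below illustrates that general idea in PyTorch with a frozen toy model: a soft one-hot matrix over the vocabulary is optimised so that the model's next-token logits match an observed target, then rounded back to discrete tokens. It is not the authors' SODA implementation; the toy architecture, loss, optimiser, and hyperparameters are assumptions, and SODA's periodic restarts and parameter decay are omitted.

```python
# Minimal sketch of prompt inversion via a continuous relaxation (NOT the
# authors' SODA algorithm). The toy model, loss, optimiser, and all
# hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab_size, seq_len, d_model = 100, 5, 32

# Frozen toy "language model": embed each position, flatten, map to next-token logits.
embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(seq_len * d_model, vocab_size)
for p in list(embed.parameters()) + list(head.parameters()):
    p.requires_grad_(False)

def next_token_logits(soft_one_hot):
    # soft_one_hot: (seq_len, vocab_size) with each row on the probability simplex.
    token_embeds = soft_one_hot @ embed.weight   # relaxed (differentiable) embedding lookup
    return head(token_embeds.reshape(-1))        # (vocab_size,) next-token logits

# Hidden ground-truth prompt and the logits an auditor would observe.
true_ids = torch.randint(0, vocab_size, (seq_len,))
with torch.no_grad():
    target_logits = next_token_logits(F.one_hot(true_ids, vocab_size).float())

# Continuous relaxation of the discrete search space: unconstrained scores,
# mapped onto the simplex by a row-wise softmax.
scores = torch.randn(seq_len, vocab_size, requires_grad=True)
opt = torch.optim.Adam([scores], lr=0.1)

for step in range(2000):
    opt.zero_grad()
    loss = F.mse_loss(next_token_logits(F.softmax(scores, dim=-1)), target_logits)
    loss.backward()
    opt.step()

# Round the relaxation back to discrete tokens; exact recovery is not guaranteed,
# especially for this underdetermined toy setup.
recovered_ids = scores.argmax(dim=-1)
print("exact match:", bool(torch.equal(recovered_ids, true_ids)), "final loss:", loss.item())
```

The row-wise softmax is one common way to keep each relaxed token on the probability simplex while making the discrete search differentiable; the paper's formulation and the mechanisms that give it a unique global minimum are in the linked code and paper, not reproduced here.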