Privacy-Preserving Mechanisms Enable Cheap Verifiable Inference of LLMs

Authors: ICLR 2026 Conference Submission 21427 Authors (anonymous)

Published: 19 Sept 2025 (modified: 08 Oct 2025)
License: CC BY 4.0
Keywords: privacy, verifiability, trust, SMPC, ZK
TL;DR: We present three protocols that use privacy-preserving inference methods to obtain verifiable inference for LLMs.
Abstract: As large language models (LLMs) continue to grow in size, fewer users are able to host and run models locally. This has led to increased use of third-party hosting services. In this setting, however, there are no guarantees on the computation performed by the inference provider: a dishonest provider may, for example, silently replace an expensive large model with a cheaper-to-run weaker model and return the weaker model's results to the user. Existing tools for verifying inference typically rely on cryptographic methods such as zero-knowledge proofs (ZKPs), but these add significant computational overhead and remain infeasible for large models. In this work, we develop a new insight: given a method for performing \emph{private} LLM inference, one can obtain forms of \emph{verified} inference at marginal extra cost. Specifically, we propose three new protocols, each of which leverages privacy-preserving LLM inference to provide different guarantees on the inference that was carried out. Our approaches are cheap, requiring only a few extra tokens of computation, and have little to no downstream impact. Since the fastest privacy-preserving inference methods are typically faster than ZK methods, the proposed protocols also improve verification runtime. Our work provides novel insights into the connections between privacy and verifiability in LLM inference.
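
The abstract does not spell out the three protocols, but the core insight admits a small illustration. The sketch below is a hypothetical construction, not the paper's method: every name in it (`private_infer`, `challenge_bank`, `spot_check`) is an assumption introduced here for illustration. It shows one generic way privacy can buy verification: if all queries travel over a privacy-preserving channel, the provider cannot distinguish a real prompt from a verification challenge, so it cannot answer challenges with the claimed large model while serving real prompts with a cheaper substitute.

```python
# Hypothetical sketch only; all names below are illustrative assumptions,
# not an API or protocol from the paper.

import secrets
from typing import Callable, Dict, Tuple


def spot_check(
    private_infer: Callable[[str], str],  # opaque private-inference channel (assumed)
    challenge_bank: Dict[str, str],       # challenge -> continuation expected from the large model
    prompt: str,
) -> Tuple[str, bool]:
    """Answer `prompt` and report whether a hidden challenge was answered
    the way the claimed large model would answer it."""
    # Sample the challenge unpredictably so the provider cannot precompute it.
    challenge = secrets.choice(list(challenge_bank))
    # The privacy guarantee hides which query is the challenge, so the
    # provider must run the same model on both queries or risk detection.
    passed = private_infer(challenge).strip() == challenge_bank[challenge].strip()
    answer = private_infer(prompt)
    return answer, passed


if __name__ == "__main__":
    # Toy demonstration with a stub provider standing in for private inference.
    bank = {"Complete the sequence: 2, 4, 6, 8,": "10"}
    honest_provider = lambda q: bank.get(q, "a generic model answer")
    answer, passed = spot_check(honest_provider, bank, "Summarize the claim.")
    print(answer, "| challenge passed:", passed)
```

Consistent with the abstract's cost claim, a scheme of this shape adds only the few extra tokens of the challenge query. A real protocol would need challenges whose continuations reliably fingerprint the large model; the paper's actual constructions and guarantees are given in the full text.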
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21427