Scaling up Trustless DNN Inference with Zero-Knowledge Proofs

Published: 27 Oct 2023, Last Modified: 12 Dec 2023 · RegML 2023
Keywords: zero knowledge proof, trustless DNN execution
TL;DR: We scale trustless DNN inference to ImageNet using zero-knowledge proofs
Abstract: As ML models have increased in capabilities and accuracy, so has the complexity of their deployments. Increasingly, ML model consumers are turning to service providers to serve ML models in the ML-as-a-service (MLaaS) paradigm. As MLaaS proliferates, a critical requirement emerges: how can model consumers verify that the correct predictions were served, in the face of malicious, lazy, or buggy service providers? We present the first practical ImageNet-scale method to verify ML model inference non-interactively, i.e., after the inference has been done. To do so, we leverage recent developments in ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge), a form of zero-knowledge proofs. ZK-SNARKs allow us to verify ML model execution non-interactively and with only standard cryptographic hardness assumptions. We provide the first ZK-SNARK proof of valid inference for a full-resolution ImageNet model, achieving 79% top-5 accuracy, with verification taking as little as one second. We further use these ZK-SNARKs to design protocols to verify ML model execution in a variety of scenarios, including verifying MLaaS predictions, verifying MLaaS model accuracy, and using ML models for trustless retrieval. Together, our results show that ZK-SNARKs hold promise for making verified ML model inference practical.
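The commit-prove-verify flow the abstract describes can be sketched schematically. In the sketch below, `commit_model`, `prove_inference`, and `verify_inference` are hypothetical placeholders of my own naming: a real deployment would use actual ZK-SNARK proving and verification over an arithmetized DNN, and the hash-based "proof" here has none of a SNARK's soundness or zero-knowledge guarantees — it only illustrates the interface between provider and consumer.

```python
import hashlib
import json

# Schematic sketch of non-interactive inference verification (NOT the
# paper's implementation). The placeholder "proof" is a transcript hash;
# a real ZK-SNARK proof is unforgeable and reveals nothing extra.

def commit_model(weights):
    """Public commitment binding the provider to one fixed model."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def infer(weights, x):
    """Toy 'model': a dot product stands in for full DNN inference."""
    return sum(w * xi for w, xi in zip(weights, x))

def prove_inference(weights, x):
    """Provider side: run the model and emit (output, proof)."""
    y = infer(weights, x)
    transcript = json.dumps([commit_model(weights), x, y])
    return y, hashlib.sha256(transcript.encode()).hexdigest()

def verify_inference(model_commitment, x, y, proof):
    """Consumer side: check the claimed output against the public
    commitment. Note the verifier never touches the weights."""
    transcript = json.dumps([model_commitment, x, y])
    return proof == hashlib.sha256(transcript.encode()).hexdigest()

weights = [0.5, -1.0, 2.0]
c = commit_model(weights)                       # published once by the provider
y, pi = prove_inference(weights, [1.0, 2.0, 3.0])
ok = verify_inference(c, [1.0, 2.0, 3.0], y, pi)
```

The key property mirrored here is non-interactivity: the consumer checks `(x, y, proof)` against a previously published commitment after the fact, with no further messages to the provider.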
Submission Number: 8