Keywords: zero-knowledge, lora, inference, privacy
TL;DR: LoRA inference with zero-knowledge correctness proof
Abstract: Low-Rank Adaptation (LoRA) is a widely
adopted method for customizing large-scale language
models. In distributed, untrusted training environments, an open-source base model user may want to use LoRA weights created by an external contributor, which leads to two requirements: (1) the base model user must confirm that the LoRA weights are effective when paired with the intended base model, and (2) the contributor must keep their proprietary LoRA weights private until the conditions that allow their release have been met.
We present ZKLoRA, a zero-knowledge verification
protocol that relies on succinct proofs and our
novel Multi-Party Inference procedure to verify
LoRA–base model compatibility without exposing
LoRA weights. ZKLoRA produces deterministic
correctness guarantees and validates each
LoRA module in only 1–2 seconds on state-of-the-art large language models. This low-latency
approach enables nearly real-time verification and
promotes secure collaboration among geographically
decentralized teams and contract-based training
pipelines. The protocol ensures that the delivered
LoRA module works as claimed, safeguarding
the contributor’s intellectual property while
providing the base model user with verification of
compatibility and lineage.
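The abstract does not specify the protocol's internals; as a minimal illustrative sketch, the computation that such a verification protocol must attest is the standard LoRA forward pass h = Wx + (alpha/r)BAx, where the base weight W is held by the base model user and the low-rank factors A, B by the contributor. All names below are hypothetical, and the succinct-proof and Multi-Party Inference machinery is omitted.

    # Illustrative sketch only: the LoRA-adapted layer whose correctness
    # a protocol like ZKLoRA would attest. Proof generation is not shown.
    import numpy as np

    def lora_forward(x, W, A, B, alpha=16.0):
        """LoRA-adapted linear layer: h = W x + (alpha / r) * B (A x).

        W: (d_out, d_in) frozen base weight (base model user's side).
        A: (r, d_in), B: (d_out, r) low-rank adapter (contributor's side).
        """
        r = A.shape[0]
        return W @ x + (alpha / r) * (B @ (A @ x))

    # Toy check: a rank-4 adapter on a 64x64 layer.
    rng = np.random.default_rng(0)
    d, r = 64, 4
    W = rng.normal(size=(d, d))
    A = rng.normal(size=(r, d))
    B = np.zeros((d, r))  # B starts at zero, as in standard LoRA init
    x = rng.normal(size=d)
    # With a zero adapter, the adapted model must match the base model.
    assert np.allclose(lora_forward(x, W, A, B), W @ x)

In the actual protocol, the contributor would prove in zero knowledge that this forward pass was computed with their committed A and B, without revealing the factors themselves.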
Submission Number: 6