Probabilistic Audits for Verifiable Training and Outcome Improvement in Decentralized Learning

ICLR 2026 Conference Submission 20510 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Training, Decentralized Training, Verifiable Computation
Abstract: Decentralized training of large models presents two critical verification challenges: ensuring the training process was executed correctly (process verification) and confirming that the resulting model genuinely improved (outcome verification). Existing solutions such as zkML are prohibitively expensive, while prior Proof-of-Learning schemes address only the process and fail to guarantee that the final model is actually better. We introduce a comprehensive and efficient framework that addresses both challenges through economically secured probabilistic audits. First, we propose a protocol in which Provers commit to each training step and a small, random fraction of steps is audited by verifier committees, and we derive a tight detection-cost frontier that minimizes verification overhead. Second, we introduce Proof-of-Improvement (PoI), a novel and lightweight evaluation audit that statistically certifies milestone-based gains (e.g., perplexity reduction) on a committed dataset. Empirically, on a QLoRA fine-tuning task, our process audits reduce verification compute by over 95% compared to full replication, and our PoI audits certify model improvements with high statistical power at minimal cost.
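As a minimal sketch of the two audit primitives the abstract describes, the snippet below illustrates (i) the probability that uniform random step audits catch at least one falsified training step, and (ii) a one-sided paired z-test that certifies a per-example loss reduction on a committed evaluation set. The function names, the hypergeometric sampling model, and the z-test are illustrative assumptions, not the paper's protocol or API.

```python
from math import comb, erf, sqrt
from statistics import mean, stdev


def detection_probability(total_steps: int, faulty_steps: int, audited_steps: int) -> float:
    """Probability that a uniform random audit sample (drawn without replacement)
    contains at least one falsified step. Hypothetical helper, not the paper's API."""
    if audited_steps >= total_steps:
        return 1.0
    # P(no faulty step audited) = C(total - faulty, audited) / C(total, audited)
    p_miss = comb(total_steps - faulty_steps, audited_steps) / comb(total_steps, audited_steps)
    return 1.0 - p_miss


def certify_improvement(loss_before, loss_after, min_gain=0.0, alpha=0.05):
    """One-sided paired z-test that per-example loss dropped by more than `min_gain`
    on a committed evaluation sample (normal approximation). Illustrative only."""
    diffs = [b - a for b, a in zip(loss_before, loss_after)]  # positive = improvement
    n = len(diffs)
    z = (mean(diffs) - min_gain) / (stdev(diffs) / sqrt(n))
    p_value = 0.5 * (1.0 - erf(z / sqrt(2)))  # upper-tail p-value of the z statistic
    return p_value < alpha, p_value


if __name__ == "__main__":
    # Example: auditing 2% of 10,000 steps when 1% of them were falsified
    # yields a detection probability of roughly 0.87 under this sampling model.
    print(detection_probability(10_000, 100, 200))
```

The sketch is only meant to show why auditing a small random fraction of committed steps can still yield a high probability of catching a cheating Prover, and how a lightweight statistical test could certify an improvement milestone; the paper's actual audit schedule, committee mechanics, and test statistics may differ.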
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 20510