Abstract: The security of blockchain systems rests on decentralized consensus, in which a majority of parties behave honestly, and the content verification process is essential to maintaining that robustness. However, a rational verifier may lack the incentive to perform costly verification honestly, a phenomenon known as the Verifier's Dilemma, which encourages lazy reporting and undermines the fundamental security of blockchain systems, particularly for verification-expensive decentralized AI applications. In this paper, we initiate this line of research by developing a Byzantine-robust peer prediction framework for designing one-phase Bayesian truthful mechanisms for decentralized verification games among multiple verifiers. The framework incentivizes all verifiers to perform honest verification without access to the ground truth, even in the presence of noisy observations, malicious players, and inaccurate priors, and we propose a compactness criterion that ensures these robustness guarantees. With robust incentive guarantees and budget efficiency, our study provides a framework of incentive design for decentralized verification protocols that enhances the security and robustness of blockchain, decentralized AI, and potentially other decentralized systems.
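To give a rough intuition for how peer prediction rewards honest verification without ground truth, the sketch below implements a classical output-agreement baseline, not the paper's Byzantine-robust one-phase Bayesian mechanism: each verifier is paid only if its report matches that of a randomly chosen peer, so when most verifiers check honestly, honest reporting maximizes expected payment. All names and parameters here are illustrative assumptions.

```python
import random

def peer_prediction_payments(reports, reward=1.0, rng=None):
    """Output-agreement peer prediction (illustrative baseline, not the
    paper's mechanism): each verifier earns `reward` iff its report
    matches the report of a uniformly chosen peer.

    reports: dict mapping verifier id -> report (e.g. True = "block valid")
    """
    rng = rng or random.Random(0)
    ids = list(reports)
    payments = {}
    for v in ids:
        peers = [u for u in ids if u != v]  # compare against someone else
        ref = rng.choice(peers)
        payments[v] = reward if reports[v] == reports[ref] else 0.0
    return payments

# Three honest verifiers agree; a lazy verifier that guesses wrong is
# always paid 0, since every peer it can be matched with reported True.
pays = peer_prediction_payments({"a": True, "b": True, "c": True, "d": False})
```

In an honest-majority population this makes truthful verification a Bayesian equilibrium, but the plain scheme is fragile to the failure modes the abstract lists (colluding or malicious reporters, noisy observations, wrong priors), which is precisely the gap the paper's framework targets.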
External IDs: dblp:journals/corr/abs-2406-01794