Incentivizing Truthfulness in Fully Decentralized Learning with Guaranteed Accurate Convergence

05 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Decentralized learning, Truthfulness
Abstract: Decentralized learning has gained significant attention due to its advantages in scalability, privacy, and fault tolerance. In this paradigm, multiple agents collaboratively train a global model by exchanging parameters only with their neighbors, without the assistance of a centralized server. However, a key vulnerability of existing decentralized learning approaches is their implicit assumption that all agents behave honestly during gradient updates and information sharing. In real-world scenarios, this assumption often breaks down, as selfish or strategic agents may be incentivized to manipulate gradients or share false information for personal gain, ultimately compromising the final learning outcome. In this work, we propose a fully decentralized payment mechanism that, for the first time, guarantees both truthful behavior and accurate convergence in decentralized stochastic gradient descent algorithms. This represents a significant advancement, as it addresses two major limitations of existing truthfulness mechanisms for collaborative learning: 1) reliance on a centralized server for payment collection, and 2) the tradeoff between ensuring truthfulness and maintaining convergence accuracy. In addition to characterizing the convergence rate under convex or strongly convex conditions, we also prove that our approach guarantees that the cumulative gain an agent can obtain through strategic behavior remains finite, even as the number of iterations approaches infinity, a property unattainable by most existing truthfulness mechanisms. Experimental results on several machine learning applications confirm the effectiveness of our approach.
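The abstract's setting is standard decentralized SGD, in which each agent mixes its parameters with its neighbors' and then takes a local gradient step. The paper's payment mechanism is not specified on this page, so the sketch below illustrates only the underlying (fully truthful) decentralized SGD update on a ring topology; the least-squares objective, the mixing weights, and all variable names are illustrative assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each agent i holds a private least-squares objective
# f_i(x) = (1/2m) * ||A_i x - b_i||^2; the global objective is their average.
n_agents, dim, n_samples = 5, 10, 40
A = [rng.normal(size=(n_samples, dim)) for _ in range(n_agents)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true + 0.1 * rng.normal(size=n_samples) for Ai in A]

# Ring topology: each agent averages with its two neighbors.
# W is a doubly stochastic mixing matrix (uniform weights for simplicity).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

x = np.zeros((n_agents, dim))  # row i is agent i's local model
eta = 0.05                     # step size
for t in range(500):
    # Each agent computes a gradient on its own data (assumed honest here).
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) / n_samples
                      for i in range(n_agents)])
    # Gossip step (neighbor averaging), then local gradient descent.
    x = W @ x - eta * grads

print("consensus error:", np.max(np.abs(x - x.mean(axis=0))))
print("distance to x_true:", np.linalg.norm(x.mean(axis=0) - x_true))
```

The vulnerability the paper targets is visible in the `grads` line: a strategic agent could report a manipulated gradient or share false parameters in the gossip step, and plain D-SGD has no mechanism to penalize this; the proposed payment mechanism is designed to make such deviations unprofitable while preserving convergence.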
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 2211