On the convergence of distributed projected gradient play with heterogeneous learning rates in monotone games

Published: 01 Jan 2023 · Last Modified: 06 Feb 2025 · Syst. Control. Lett. 2023 · CC BY-SA 4.0
Abstract: In this paper, we consider distributed game-theoretic learning problems in which a group of players seek a Nash equilibrium through purely local information sharing during a repeated game. In particular, we are interested in scenarios where each player uses an uncoordinated (heterogeneous) rather than identical learning rate for its local action update. We find that both the maximum value and the heterogeneity of the players' learning rates determine whether the distributed projected gradient play converges. Based on the contraction mapping theorem, we establish explicit conditions on the learning rates that guarantee geometric convergence of both the consensus-based and the augmented-game-based distributed projected gradient play. Furthermore, to relax these conditions, several variants of the distributed projected gradient play are proposed that adopt different information-sharing strategies over the network. A numerical example is provided to support the theoretical development.
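The core update studied in the abstract can be illustrated with a minimal sketch (not the paper's distributed algorithm, which shares information over a network): projected gradient play with heterogeneous learning rates on a 2-player strongly monotone quadratic game. The game matrix `A`, offset `b`, step sizes `alphas`, and the feasible box `[0, 1]^2` are all illustrative assumptions; the Nash equilibrium solves `A x* + b = 0`, giving `x* = (1/3, 1/3)` here.

```python
import numpy as np

# Illustrative 2-player quadratic game: player i's partial gradient is the
# i-th component of the pseudo-gradient map F(x) = A @ x + b. A is positive
# definite, so F is strongly monotone and the Nash equilibrium is unique.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([-1.0, -1.0])
alphas = np.array([0.1, 0.2])  # uncoordinated (heterogeneous) learning rates

def projected_gradient_play(x0, steps=500):
    """Each player updates its own action with its own step size and
    projects back onto the feasible box [0, 1]^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        grad = A @ x + b                          # pseudo-gradient at x
        x = np.clip(x - alphas * grad, 0.0, 1.0)  # projection onto the box
    return x

x = projected_gradient_play([1.0, 0.0])
print(x)  # converges to the Nash equilibrium (1/3, 1/3)
```

The role of the maximum and heterogeneity of the step sizes is visible in the (interior) iteration matrix `I - diag(alphas) @ A`: geometric convergence holds when its spectral radius is below 1 (about 0.87 for the values above), and enlarging either step size or their spread can push it past 1.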