Bayesian Inference-Aided Large Language Model Agents in Infinitely Repeated Games: A Dynamic Network View
Abstract: The rapid expansion of large language models (LLMs) has made interactions between LLM agents and human users increasingly frequent, motivating new questions about their capacity to form and maintain cooperative relationships. Game theory, an effective tool for studying strategic interactions, has therefore attracted attention in LLM research, particularly for exploring interactions between LLMs and users. However, most previous studies focus on the performance of LLMs in static games or finitely repeated games; these settings are relatively stylized and cannot fully capture the complex, evolving nature of User–LLM interactions. In this paper, we model User–LLM interactions as a dynamic network of repeated strategic exchanges and propose an infinitely repeated game framework to analyze the behavioral traits of LLMs in such settings. To enable adaptive decision-making under uncertainty, we further incorporate Bayesian inference using a beta distribution as both the prior and the posterior (a standard conjugate pair for Bernoulli observations). We conduct a case study on widely used, state-of-the-art LLMs: GPT-3, GPT-4, DeepSeek-V3, Qwen2.5-72B, Qwen2.5-7B, and Llama-3-70B. Experimental results demonstrate that LLMs perform well in infinitely repeated games, indicating their capacity for decision-making and cooperation during repeated interactions within dynamic networks. The integration of Bayesian inference further reveals that LLMs can effectively process probabilistic information, leading to improved performance. Our findings suggest that LLM agents weigh future payoffs rather than caring only about single-stage rewards, and that they can build and maintain long-term cooperative relationships with users in dynamic network settings.
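The beta-prior/beta-posterior update mentioned in the abstract can be sketched as follows. This is a minimal illustration of beta-Bernoulli conjugacy applied to tracking a partner's cooperation rate in a repeated game; the class name, the uniform Beta(1, 1) prior, and the 0.5 decision threshold are illustrative assumptions, not the paper's actual implementation.

```python
class BetaBernoulliAgent:
    """Tracks a partner's cooperation rate with a beta distribution.

    Because the beta distribution is conjugate to the Bernoulli
    likelihood, each observed move updates the posterior in closed
    form: it remains a beta with incremented pseudo-counts.
    """

    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is the uniform prior over the cooperation probability.
        self.alpha = alpha
        self.beta = beta

    def observe(self, cooperated):
        # Conjugate update: cooperation bumps alpha, defection bumps beta.
        if cooperated:
            self.alpha += 1
        else:
            self.beta += 1

    def expected_cooperation(self):
        # Posterior mean of Beta(alpha, beta).
        return self.alpha / (self.alpha + self.beta)

    def act(self, threshold=0.5):
        # Illustrative policy: cooperate when the partner's estimated
        # cooperation probability clears the (assumed) threshold.
        return self.expected_cooperation() >= threshold


agent = BetaBernoulliAgent()
for move in [True, True, False, True]:   # partner's observed moves
    agent.observe(move)
print(agent.expected_cooperation())      # posterior mean = 4/6 ≈ 0.667
```

In an LLM-agent setting, the posterior parameters (or the posterior mean) would be serialized into the prompt at each round, giving the model explicit probabilistic information about its partner's history rather than a raw move list.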
External IDs: dblp:journals/tnse/PanCSWWHH26