Large Language Models are Bad Game Theoretic Reasoners: Evaluating Performance and Bias in Two-Player Non-Zero-Sum Games

Published: 18 Jun 2024 · Last Modified: 26 Jul 2024 · ICML 2024 Workshop on LLMs and Cognition Poster · CC BY 4.0
Keywords: Natural Language Processing, Large Language Models, Game Theory, Biases
TL;DR: Large Language Models are unreliable at game-theoretic reasoning because systematic biases significantly degrade their performance on these tasks.
Abstract: Large Language Models (LLMs) have been increasingly used in real-world settings, yet their strategic abilities remain largely unexplored. Game theory provides a good framework for assessing the decision-making abilities of LLMs in interactions with other agents. Although prior studies have shown that LLMs can solve these tasks with carefully curated prompts, they fail when the problem setting or prompt changes. In this work, we investigate LLMs' behaviour in two strategic games, Stag Hunt and the Prisoner's Dilemma, analyzing performance variations under different settings and prompts. We observed that the LLMs' performance drops when the game configuration is misaligned with the affecting biases. Performance is assessed by whether the model selects the correct action, i.e., the action consistent with both players' prompted preferred behaviours; alignment refers to whether the LLM's bias points toward that correct action. We found that GPT-3.5, GPT-4-Turbo, and Llama-3-8B show average performance drops under misalignment of 32\%, 25\%, and 29\%, respectively, in Stag Hunt, and 28\%, 16\%, and 24\%, respectively, in the Prisoner's Dilemma. Our results show that this is because the tested state-of-the-art LLMs are significantly affected by at least one of three systematic biases: (1) positional bias, (2) payoff bias, or (3) behavioural bias.
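To make the evaluation setup in the abstract concrete, here is a minimal, illustrative sketch of the two games and of treating the payoff-maximizing response to the other player's prompted behaviour as the "correct" action. The specific payoff values, action names, and the preference rule are assumptions for illustration, not the paper's actual experimental configuration.

```python
# Minimal sketch (not the paper's implementation): toy payoff matrices for
# Stag Hunt and the Prisoner's Dilemma, plus a check of whether a chosen
# action matches the "correct" action implied by the opponent's prompted
# behaviour. All numbers here are standard textbook payoffs, assumed for
# illustration only.

# Payoffs are (row player, column player) for each (row action, column action).
STAG_HUNT = {
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),
}

PRISONERS_DILEMMA = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"): (0, 5),
    ("Defect", "Cooperate"): (5, 0),
    ("Defect", "Defect"): (1, 1),
}


def best_response(payoffs: dict, opponent_action: str) -> str:
    """Row player's payoff-maximizing action given the opponent's action."""
    actions = {row for row, _ in payoffs}
    return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])


def is_correct(payoffs: dict, chosen: str, opponent_action: str) -> bool:
    """Treat the best response to the opponent's stated behaviour as correct."""
    return chosen == best_response(payoffs, opponent_action)


if __name__ == "__main__":
    # If the opponent is prompted to play Stag, the correct action is Stag;
    # a model answering Hare here would count as a performance failure.
    print(best_response(STAG_HUNT, "Stag"))               # Stag
    print(is_correct(STAG_HUNT, "Hare", "Stag"))          # False
    print(best_response(PRISONERS_DILEMMA, "Cooperate"))  # Defect
```

Under this kind of rule, "alignment" would mean that a model's systematic tendency (e.g., always picking the first-listed option, or always cooperating) happens to coincide with the correct action, so misalignment exposes the bias as a performance drop.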
Submission Number: 36