Competing LLM Agents in a Non-Cooperative Game of Opinion Polarisation

ACL ARR 2025 May Submission 6582 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: We introduce a non-cooperative game that incorporates elements of social psychology, including confirmation bias, resource constraints, and influence penalties, to analyse the factors that shape how opinions form and how they resist change. A key aspect of our simulation game is the introduction of penalties for Large Language Model (LLM) agents when generating messages to spread or counter misinformation, thus integrating resource optimisation into the decision-making process. Our study reveals that higher confirmation bias leads to stronger opinion alignment but also amplifies polarisation, whereas lower confirmation bias results in fragmented opinions with limited shifts. Investing in a high-resource debunking strategy significantly boosts the population's alignment with the debunking agent early in the game, but at the cost of rapid resource depletion, limiting long-term influence.
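The dynamics described in the abstract (competing spreader and debunker agents, per-message resource costs, and confirmation-biased receivers) can be illustrated with a toy simulation. The sketch below is a minimal illustration under assumed names and values (ACCEPTANCE_RADIUS, LEARNING_RATE, the budgets, and fixed stance values standing in for LLM-generated messages); it is not the authors' implementation.

```python
import random

# Minimal sketch of the kind of game described in the abstract (assumed parameters).
# Two competing agents, a misinformation spreader and a debunker, spend a limited
# budget to broadcast messages; each receiver updates its opinion only when the
# message's stance is close enough to its current view (a simple confirmation-bias rule).

N_RECEIVERS = 100        # population size (assumed)
ROUNDS = 50              # number of game rounds (assumed)
ACCEPTANCE_RADIUS = 0.6  # how far a message can be from a receiver's opinion and still persuade (assumed)
LEARNING_RATE = 0.2      # opinion shift per accepted message (assumed)


def play(spreader_budget=30.0, debunker_budget=30.0, msg_cost=1.0):
    # Opinions in [-1, 1]: -1 = fully aligned with the misinformation, +1 = fully debunked.
    opinions = [random.uniform(-1, 1) for _ in range(N_RECEIVERS)]
    budgets = {"spreader": spreader_budget, "debunker": debunker_budget}
    stances = {"spreader": -1.0, "debunker": +1.0}

    for _ in range(ROUNDS):
        for agent, stance in stances.items():
            if budgets[agent] < msg_cost:
                continue                 # resource constraint: no budget, no message
            budgets[agent] -= msg_cost   # influence penalty: every message costs resources
            for i, op in enumerate(opinions):
                # Confirmation bias: only receivers whose opinion is already
                # close to the message's stance are influenced by it.
                if abs(op - stance) <= ACCEPTANCE_RADIUS:
                    opinions[i] = op + LEARNING_RATE * (stance - op)

    mean_alignment = sum(opinions) / N_RECEIVERS  # > 0 means the debunker is ahead
    return mean_alignment, budgets


if __name__ == "__main__":
    print(play())
```

A smaller ACCEPTANCE_RADIUS plays the role of stronger confirmation bias: messages persuade only like-minded receivers, so opinions cluster around each agent's stance rather than converging, which mirrors the polarisation effect the abstract reports.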
Paper Type: Short
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: misinformation detection and analysis, NLP tools for social analysis
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 6582