Simulating Social Media with LLM-Powered Agents: Demography, Psychography, and Disinformation Dynamics

Published: 08 Apr 2026, Last Modified: 08 Apr 2026
Venue: MABS 2026
License: CC BY 4.0
Keywords: Social Networks, Social Bots, Agentic AI, Disinformation
Abstract: As Large Language Model (LLM)-powered agents increasingly populate Online Social Networks, the threat posed by algorithmic social actors has evolved from simple automation to sophisticated cognitive manipulation. To investigate the resilience of digital discourse against these actors, we conducted a large-scale social simulation involving 500 LLM-driven agents over a 30-day period. Within this population, we partition agents into skeptical users and a 20% minority of disinformers, the latter instructed to spread false narratives and provoke discursive conflict. Our experimental results, derived from over 400,000 interaction events, reveal a critical shift in adversarial dynamics. While skeptical agents demonstrate robust cognitive filtering, "ratioing" disinformative claims with reply rates nearly sixty times higher than their repost rates, they are systematically defeated by "conversational exhaustion". We observe a stark asymmetry in discursive persistence: skeptical agents abandon confrontational threads at a rate (71.6%) more than double that of disinformers. These results suggest that, beyond narrative persuasion, structural fatigue is among the most challenging vulnerabilities of digital ecosystems, where antagonistic persistence prevails over attempts to resist disinformation.
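The two headline metrics in the abstract, the reply-to-repost ratio ("ratioing") and the thread-abandonment rate per agent group, could be computed from an interaction log along the following lines. This is a minimal sketch under an assumed event schema (the paper does not specify its log format); the field names and sample data are hypothetical.

```python
from collections import Counter

# Hypothetical interaction log: (agent_group, action) tuples, where
# action is "reply" or "repost". The real study logs 400,000+ events.
events = [
    ("skeptic", "reply"), ("skeptic", "reply"), ("skeptic", "repost"),
    ("disinformer", "reply"), ("disinformer", "repost"),
]

# Hypothetical per-thread records: whether a group's agent exited a
# confrontational thread without replying again.
threads = [
    {"group": "skeptic", "abandoned": True},
    {"group": "skeptic", "abandoned": True},
    {"group": "skeptic", "abandoned": False},
    {"group": "disinformer", "abandoned": False},
]

def reply_repost_ratio(events, group):
    """Replies divided by reposts for one agent group ("ratioing")."""
    counts = Counter(action for g, action in events if g == group)
    return counts["reply"] / counts["repost"]

def abandonment_rate(threads, group):
    """Fraction of confrontational threads a group exits early."""
    relevant = [t for t in threads if t["group"] == group]
    return sum(t["abandoned"] for t in relevant) / len(relevant)

print(reply_repost_ratio(events, "skeptic"))   # 2 replies / 1 repost -> 2.0
print(abandonment_rate(threads, "skeptic"))    # 2 of 3 threads -> ~0.667
```

With the study's reported numbers, the skeptics' ratio would be near 60 and their abandonment rate 71.6%; the toy data above merely illustrates the computation.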
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 18