Collective Social Behaviors in LLMs: An Analysis of LLM Social Networks

Published: 09 Jun 2025 · Last Modified: 08 Jul 2025 · KDD 2025 Workshop SciSocLLM · CC BY 4.0
Keywords: LLMs, LLM Social Behavior, Network Analysis, Toxic Language
Abstract: Large Language Models (LLMs) are an inseparable part of our society and increasingly mediate our social, cultural, and political interactions. While LLMs can simulate some human behaviors and decision-making processes, largely owing to their training data, it remains underexplored whether their iterative interactions with other agents amplify their biases or lead to exclusionary behaviors over time. In this paper, we study \emph{Chirper.ai}, an LLM-driven social media platform, by analyzing over 7M posts and interactions among more than 32K LLM agents over a year. We begin by characterizing the micro-level properties and structure of LLM social networks (e.g., degree distribution, clustering coefficient). We then study homophily and social influence among LLMs, finding that, like human social networks, LLM social networks exhibit both of these fundamental phenomena. Next, we study toxic language produced by LLMs, examining its linguistic features and interaction patterns, and find that LLMs show structural patterns in toxic posting, and in reactions to toxic posts, that differ from those of humans. Finally, we address how to prevent harmful LLM activity using a simple yet effective method, called Chain of Social Thought (CoST), that reminds LLM agents to avoid harmful posting.
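
To make the micro-level analysis concrete, here is a minimal sketch (not the authors' code) of computing the degree distribution and clustering coefficient the abstract mentions, using networkx on an agent interaction graph; the `interactions.csv` file name and its two-column (source, target) format are assumptions for illustration.

```python
# Illustrative sketch: micro-level statistics of an LLM agent network.
# The input file and its format are hypothetical, not from the paper.
import collections
import networkx as nx

# Build a directed interaction graph from (source, target) agent pairs.
G = nx.DiGraph()
with open("interactions.csv") as f:
    for line in f:
        src, dst = line.strip().split(",")
        G.add_edge(src, dst)

# Degree distribution: how many agents receive each in-degree.
degree_counts = collections.Counter(d for _, d in G.in_degree())

# Clustering coefficient, computed here on the undirected projection.
avg_clustering = nx.average_clustering(G.to_undirected())

print(f"agents: {G.number_of_nodes()}, interactions: {G.number_of_edges()}")
print(f"average clustering coefficient: {avg_clustering:.3f}")
for degree, count in sorted(degree_counts.items())[:10]:
    print(f"in-degree {degree}: {count} agents")
```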
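Since the abstract describes CoST only as a reminder that steers agents away from harmful posting, the following is a hypothetical sketch of that idea: injecting a safety reminder into each agent's system prompt before it generates a post. The reminder wording and the `build_prompt` helper are illustrative assumptions, not the paper's actual prompt.

```python
# Hypothetical sketch of a CoST-style reminder; the exact prompt used in the
# paper is not given in the abstract, so this wording is an assumption.
COST_REMINDER = (
    "Before posting, reflect on the social impact of your words. "
    "Avoid toxic, harmful, or exclusionary language."
)

def build_prompt(agent_persona: str, conversation: str) -> list[dict]:
    """Assemble chat messages with the CoST-style reminder injected."""
    return [
        {"role": "system", "content": f"{agent_persona}\n\n{COST_REMINDER}"},
        {"role": "user", "content": conversation},
    ]
```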
Submission Number: 24