Keywords: Large Language Models, Multi-Agent Simulation, Social Ties, Behavioral Rewards, In-Context Learning
Abstract: Can large language model (LLM) agents reproduce the complex social dynamics that characterize human online behavior—shaped by homophily, reciprocity, and social validation—and what memory and learning mechanisms enable such dynamics to emerge? We present a multi-agent LLM simulation framework in which agents repeatedly interact, evaluate one another, and adapt their behavior through in-context learning accelerated by a coaching signal. To model human social behavior, we design behavioral reward functions that capture core drivers of online engagement, including social interaction, information seeking, self-presentation, coordination, and emotional support. These rewards align agent objectives with empirically observed user motivations, enabling the study of how network structures and group formations emerge from individual decision-making. Our experiments show that coached LLM agents develop stable interaction patterns and form emergent social ties, yielding network structures that mirror properties of real online communities. By combining behavioral rewards with in-context adaptation, our framework establishes a principled testbed for investigating collective dynamics in LLM populations and reveals how artificial agents may approximate or diverge from human-like social behavior.
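The abstract describes a loop in which agents interact, are evaluated against behavioral reward functions, and adapt via in-context learning guided by a coaching signal. The following is a minimal, hypothetical sketch of such a loop; all names (`Agent`, `behavioral_reward`, `coach_feedback`), the reward weights, and the rule-based coach are illustrative assumptions and not the authors' implementation, and the LLM call is stubbed out so the snippet runs standalone.

```python
# Hypothetical sketch of a coached behavioral-reward loop for LLM agents.
# Names, weights, and the rule-based coach are assumptions; the LLM call is a stub.
import random
from dataclasses import dataclass, field

# Weights over the engagement drivers named in the abstract (assumed values).
REWARD_WEIGHTS = {
    "social_interaction": 0.30,
    "information_seeking": 0.20,
    "self_presentation": 0.20,
    "coordination": 0.15,
    "emotional_support": 0.15,
}

def behavioral_reward(signals: dict) -> float:
    """Scalar reward: weighted sum of per-driver signals in [0, 1]."""
    return sum(REWARD_WEIGHTS[k] * signals.get(k, 0.0) for k in REWARD_WEIGHTS)

@dataclass
class Agent:
    name: str
    # In-context history of (message, reward, coaching) tuples; no weight updates.
    memory: list = field(default_factory=list)

    def act(self, partner: "Agent") -> str:
        # Placeholder for an LLM call conditioned on self.memory (in-context learning).
        return f"{self.name} -> {partner.name}: hello (round {len(self.memory)})"

    def update(self, message: str, reward: float, coaching: str) -> None:
        # Coaching is appended to the prompt context rather than used for fine-tuning.
        self.memory.append((message, reward, coaching))

def evaluate(message: str) -> dict:
    # Placeholder scorer; in the framework, agents evaluate one another's interactions.
    return {k: random.random() for k in REWARD_WEIGHTS}

def coach_feedback(reward: float) -> str:
    # Simple rule-based coaching signal intended to accelerate in-context adaptation.
    return "keep this style" if reward > 0.5 else "ask a question or offer support"

if __name__ == "__main__":
    agents = [Agent("A"), Agent("B"), Agent("C")]
    for _ in range(3):  # repeated interaction rounds
        for agent in agents:
            partner = random.choice([a for a in agents if a is not agent])
            msg = agent.act(partner)
            r = behavioral_reward(evaluate(msg))
            agent.update(msg, r, coach_feedback(r))
    print(agents[0].memory)
```

In this reading, emergent social ties would arise from which partners an agent's accumulated in-context memory leads it to favor over repeated rounds; the stubbed `act` and `evaluate` functions are where the paper's LLM prompting and peer evaluation would plug in.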
Archival Option: The authors of this submission do *not* want it to appear in the archival proceedings.
Submission Number: 122