Abstract: LLMs in social media research are a double-edged sword: they generate human-like behavior, advancing the study of social dynamics, but they also escalate risks such as information manipulation, disinformation, and misinformation. While previous work has simulated agents through prompt engineering or fine-tuning on human-annotated data, it has often overlooked the potential of learning directly from social media, where diverse human data are abundant. Meanwhile, bot detection has typically relied on static datasets, missing the evolving nature of LLM-based bots. This paper introduces a novel adversarial learning framework that addresses both challenges through the co-evolution of the **Evo**lving **Bot** (EvoBot) and the **Detector**. EvoBot generates its own training data from previous iterations and refines its behavior based on feedback from the Detector, which is trained to distinguish between humans and bots. Experimental results demonstrate that EvoBot improves its ability to bypass detection while effectively simulating real-world social dynamics, such as group opinion formation and information spread. Additionally, the iterative training process enhances the Detector's performance and generalization, underscoring the framework's effectiveness in both generating human-like content and evolving bot detection. The code is available at https://anonymous.4open.science/r/Anonymous_EvoBot-5442.
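To make the alternating optimization described in the abstract concrete, below is a minimal, purely illustrative Python sketch of the co-evolution loop. Everything in it (`ToyBot`, `ToyDetector`, the post-length heuristic) is a hypothetical stand-in chosen so the loop runs end to end; the actual framework fine-tunes an LLM-based EvoBot against a learned classifier rather than tuning a single scalar.

```python
import random

random.seed(0)

# Stand-in "human" corpus: longer, more varied posts in this toy setup.
HUMAN_POSTS = [
    "honestly did not expect the match to end like that",
    "reading this thread with my morning coffee, wild takes",
    "can anyone recommend a good intro to graph theory?",
]

class ToyBot:
    """Stand-in for EvoBot: its 'policy' is just a mean post length."""
    def __init__(self):
        self.mean_len = 3.0  # starts with short, template-like posts

    def generate(self, n=16):
        vocab = "great news today wow check this out honestly link free".split()
        return [" ".join(random.choices(vocab, k=max(1, int(random.gauss(self.mean_len, 1)))))
                for _ in range(n)]

    def update(self, evading_posts):
        # EvoBot step: refine toward self-generated posts that evaded
        # detection in the previous iteration.
        if evading_posts:
            self.mean_len = sum(len(p.split()) for p in evading_posts) / len(evading_posts)

class ToyDetector:
    """Stand-in for the Detector: a single length threshold."""
    def __init__(self):
        self.threshold = 4.0

    def is_bot(self, post):
        return len(post.split()) < self.threshold

    def update(self, bot_posts, human_posts):
        # Detector step: retrain on fresh bot samples plus human posts by
        # placing the threshold midway between the two class means.
        bot_mean = sum(len(p.split()) for p in bot_posts) / len(bot_posts)
        human_mean = sum(len(p.split()) for p in human_posts) / len(human_posts)
        self.threshold = (bot_mean + human_mean) / 2

bot, detector = ToyBot(), ToyDetector()
for it in range(5):
    posts = bot.generate()
    evading = [p for p in posts if not detector.is_bot(p)]
    bot.update(evading)                           # EvoBot adapts to evade
    detector.update(bot.generate(), HUMAN_POSTS)  # Detector catches up
    print(f"iter {it}: evasion rate {len(evading)/len(posts):.2f}, "
          f"threshold {detector.threshold:.2f}")
```

The structure carried over from the paper is the alternation itself: EvoBot adapts toward samples that evaded the current Detector, and the Detector is then retrained on EvoBot's latest outputs alongside human data.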
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: quantitative analyses of news and/or social media
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 6630