Abstract: The Spiral of Silence (SoS) theory posits that, in human societies, fear of social isolation drives individuals who hold a minority opinion to fall silent, allowing the majority opinion to dominate public discourse. When the agents are large language models (LLMs) rather than humans, this classic affective explanation no longer applies, because language models have neither emotions nor social anxiety. This raises a fundamental question: can purely statistical language-generation mechanisms give rise to SoS dynamics in collectives of LLM agents?
We introduce an evaluation framework based on rating sequences and design four controlled experimental conditions by varying the presence of persona configurations and historical interaction signals. To measure opinion dynamics, we employ concentration metrics, including the interquartile range (IQR) and kurtosis, along with trend analysis methods such as the Mann-Kendall test and the Spearman rank correlation coefficient. We evaluate six widely used open-source models: DeepSeek-V2-Lite-Chat, Llama-3.1-8B-Instruct, Mistral-8B-Instruct-2410, and the Qwen-2.5-Instruct series (1.5B, 3B, 7B), covering cross-family comparisons at a similar scale and within-family scaling analyses for Qwen, as well as one closed-source model, GPT-4o-mini. The experimental results indicate that (i) most models exhibit a strong default bias in the absence of social signals; (ii) persona configurations introduce opinion heterogeneity, whereas interaction history exerts an anchoring force; and (iii) when both signals are combined, self-reinforcing dominance of the majority opinion emerges far more frequently than in the other conditions, despite the agents' lack of affect.
These findings challenge traditional affect-based explanations of the SoS, provide empirical grounding for understanding and mitigating opinion convergence in LLM-based agent systems, and offer a conceptual link between computational sociology and the design of responsible artificial intelligence systems.
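For concreteness, the sketch below shows one plausible way to compute the concentration and trend statistics named in the abstract (IQR, kurtosis, Mann-Kendall, Spearman) for a single rating sequence. It is a minimal illustration, not the authors' released code: the function names and the assumption of numeric ratings indexed by turn are ours.

```python
# Minimal sketch (not the paper's implementation) of the concentration and
# trend statistics described in the abstract, assuming a numeric rating
# sequence ordered by interaction turn.
import numpy as np
from scipy.stats import kurtosis, spearmanr, norm


def concentration_metrics(ratings):
    """Opinion concentration: smaller IQR / larger kurtosis => stronger convergence."""
    r = np.asarray(ratings, dtype=float)
    q1, q3 = np.percentile(r, [25, 75])
    return {"iqr": q3 - q1, "kurtosis": kurtosis(r)}  # Fisher (excess) kurtosis


def mann_kendall(ratings):
    """Mann-Kendall test for a monotonic trend (no tie correction in this sketch)."""
    r = np.asarray(ratings, dtype=float)
    n = len(r)
    s = sum(np.sign(r[j] - r[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return {"S": s, "z": z, "p": 2 * (1 - norm.cdf(abs(z)))}


def trend_metrics(ratings):
    """Spearman correlation between turn index and rating, plus Mann-Kendall."""
    idx = np.arange(len(ratings))
    rho, p = spearmanr(idx, ratings)
    return {"spearman_rho": rho, "spearman_p": p, **mann_kendall(ratings)}
```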
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: LLM/AI agents, Spiral of Silence, Human-Centered Evaluation, Data Influence and Memorization, Natural Language Explanations, Human-Subject Application-Grounded Evaluations
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 3830