Keywords: network science, social simulations, LLMs, centrality, hubs
TL;DR: LLM-based multi-agent simulations of ground-truth recovery on networks under misinformation
Abstract: This paper examines how networked agents recover ground truth when initial knowledge is incomplete and contaminated by misinformation. We ask: (1) under what conditions can networks reconstruct the correct knowledge base, (2) how does centrality affect recovery across informational environments, and (3) what structural vulnerabilities emerge when misinformation dominates? We develop a multi-agent simulation where agents begin with a mix of true and false facts, exchange information with neighbors, and update beliefs using a redundancy-based scoring rule with contradiction resolution. The system aims to reconstruct the full ground-truth knowledge base. Experiments on the ca-GrQc collaboration network reveal abrupt bandwidth thresholds: below a critical share budget (the maximum number of facts an agent can transmit per round), convergence never occurs; above it, recovery typically completes within 1--3 rounds and yields complete accuracy across agents that converge. Information quality defines three regimes. In clean environments (truth $\geq 70\%$), centrality strongly predicts success. At a boundary near 50\% truth, centrality effects collapse ($\rho \approx 0.2$ or less). In polluted environments (truth $\leq 40\%$), central nodes amplify errors, producing a \emph{hub vulnerability paradox}. These findings identify structural weaknesses in information systems and show that resilience depends jointly on communication capacity, network topology, and information quality.
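The abstract's update mechanism (neighbors exchange a bounded number of facts per round; receivers score each fact by redundancy and resolve contradictions in favor of the better-supported polarity) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function name `simulate`, the tie-breaking rule (keep the current belief unless a strict majority contradicts it), and the synchronous update schedule are all assumptions.

```python
import random
from collections import defaultdict

def simulate(adjacency, beliefs, ground_truth, budget, rounds=10, seed=0):
    """Synchronous gossip sketch: each round, every agent shares up to
    `budget` (statement, polarity) facts with its neighbors; receivers
    adopt, per statement, the polarity asserted by the most neighbors
    (redundancy scoring).  A contradiction flips the current belief only
    on a strict majority over the votes supporting it (assumed rule)."""
    rng = random.Random(seed)
    beliefs = {a: dict(b) for a, b in beliefs.items()}
    for _ in range(rounds):
        # Bandwidth constraint: each agent transmits at most `budget` facts.
        outbox = {a: rng.sample(sorted(b.items()), min(budget, len(b)))
                  for a, b in beliefs.items()}
        new_beliefs = {}
        for a in adjacency:
            # Tally incoming assertions: statement -> polarity -> count.
            votes = defaultdict(lambda: defaultdict(int))
            for nbr in adjacency[a]:
                for stmt, pol in outbox[nbr]:
                    votes[stmt][pol] += 1
            updated = dict(beliefs[a])
            for stmt, counts in votes.items():
                best_pol, best = max(counts.items(), key=lambda kv: kv[1])
                cur = updated.get(stmt)
                # Contradiction resolution: switch only when the rival
                # polarity strictly outscores the current belief.
                if cur is None or (best_pol != cur and best > counts.get(cur, 0)):
                    updated[stmt] = best_pol
            new_beliefs[a] = updated
        beliefs = new_beliefs
    # Per-agent accuracy against the ground-truth knowledge base.
    accuracy = {a: sum(b.get(s) == p for s, p in ground_truth.items())
                   / len(ground_truth)
                for a, b in beliefs.items()}
    return beliefs, accuracy
```

On a fully connected triad where one agent holds a single false fact, the two correct neighbors outvote it within one round. The share `budget` is the knob behind the abstract's bandwidth threshold: when it falls below the number of distinct facts in circulation, some facts may never propagate and convergence can stall.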
Supplementary Material: zip
Submission Number: 179