Persuasion at Play: Understanding Misinformation Dynamics in Demographic-Aware Human-LLM Interactions
Abstract: Misinformation exposure and susceptibility vary across demographics, as some populations are more vulnerable to misinformation than others. Large language models (LLMs) introduce new dimensions to these challenges through their ability to generate persuasive content at scale and to reinforce existing biases. This study investigates the bidirectional persuasion dynamics between LLMs and humans when both are exposed to misinformative content. We analyze human-to-LLM influence using human-stance datasets and assess LLM-to-human influence by generating LLM-based persuasive arguments. Additionally, we use a multi-agent LLM framework to analyze the spread of misinformation under persuasion among demographic-oriented LLM agents. Our findings show that demographic factors influence LLM susceptibility, with up to 15 percentage point differences in correctness across groups. Multi-agent LLMs also exhibit echo chamber behavior, aligning with human-like group polarization. This work thus highlights demographic divides in misinformation dynamics and offers insights for future interventions.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLM/AI agents, prompting, safety and alignment, human-AI interaction, human factors in NLP, human-centered evaluation
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 4878