Modeling Commenter Personas in Climate Misinformation Discourse

ACL ARR 2026 January Submission4133 Authors

05 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: climate change, misinformation detection, social media, TikTok, commenter personas, persona induction, speaker-level context, discourse analysis, narrative framing, stance and belief modeling, clustering, low-resource classification, large language models, LLM prompting, susceptibility analysis, engagement dynamics, temporal persistence
Abstract: Prior work on climate misinformation detection typically classifies individual comments or claims using text alone. This framing overlooks a key property of online climate discourse: misinformation is often produced, reinforced, and sustained through stable speaker-level patterns, where the same individuals repeatedly employ characteristic narratives and rhetorical styles across interactions. We address this gap by introducing personas: data-driven groupings of commenters that capture recurring discourse strategies beyond the content of any single comment. Empirically, modeling personas improves misinformation classification beyond text-only baselines in low-capacity and resource-constrained settings, increasing accuracy from 0.79 to 0.81 and macro-F1 from 0.72 to 0.74 with a low-capacity MLP classifier (statistically significant, p = 0.021). Beyond detection, we find that personas shape how misinformation is received and propagated: persona-conditioned language model agents exhibit systematic differences in misinformation acceptance that align with human persona-level susceptibility, and the relationship between misinformation prevalence and content persistence varies with the dominant conversational persona.
Paper Type: Long
Research Area: Computational Social Science, Cultural Analytics, and NLP for Social Good
Research Area Keywords: misinformation detection and analysis, quantitative analyses of news and/or social media, NLP tools for social analysis, evaluation and metrics, statistical testing for evaluation, LLM/AI agents, probing, NLP in resource-constrained settings
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings (efficiency), Data analysis
Languages Studied: English
Submission Number: 4133