Evaluating LLMs for Detecting Climate Misinformation: How Aligned are LLMs with Expert Classification of False or Misleading Claims about Climate Change on Social Media?

ACL ARR 2024 June Submission 1651 Authors

14 Jun 2024 (modified: 02 Jul 2024) · License: CC BY 4.0
Abstract: Online dis/misinformation is a social problem that could be substantially aggravated by the development of Large Language Models (LLMs). In this study we evaluate the potential for LLMs to be part of the solution for mitigating online dis/misinformation rather than part of the problem. Employing a public expert-annotated dataset and a curated sample of social media content, we evaluate the performance of GPT-4 and fine-tuned GPT-3.5-turbo on a climate misinformation classification task, comparing them to existing climate-focused computer-assisted tools and expert assessments. Results show that fine-tuned GPT-3.5-turbo outperforms GPT-4 and a BERT-based model at classifying climate misinformation, performing on par with climate change experts who have over 20 years of experience in climate communication. These findings highlight the potential of fine-tuned LLMs to 1) enable civil society organizations with limited technical expertise to engage in a range of governance tasks with respect to climate misinformation, and 2) encourage further exploration of AI-driven solutions for detecting climate dis/misinformation.
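For readers unfamiliar with how a fine-tuned GPT model is invoked for this kind of task, the sketch below shows one plausible setup using the OpenAI chat completions API. It is a minimal illustration only: the fine-tuned model ID, the label set, and the prompt wording are placeholder assumptions, not the authors' actual configuration, which the abstract does not specify.

```python
# Illustrative sketch: prompt-based classification of a social media post with
# a fine-tuned GPT-3.5-turbo model. Model ID, labels, and prompt are
# hypothetical placeholders, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical claim categories; the paper's actual taxonomy may differ.
LABELS = [
    "not_misinformation",
    "global_warming_is_not_happening",
    "humans_are_not_the_cause",
    "impacts_are_not_bad",
    "solutions_wont_work",
    "science_is_unreliable",
]

def classify_claim(text: str,
                   model: str = "ft:gpt-3.5-turbo-0125:org::abc123") -> str:
    """Ask a (placeholder) fine-tuned model to assign one label to a post."""
    response = client.chat.completions.create(
        model=model,      # placeholder fine-tuned model ID
        temperature=0,    # deterministic output for classification
        messages=[
            {"role": "system",
             "content": "You classify social media posts about climate change. "
                        f"Reply with exactly one label from: {', '.join(LABELS)}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    post = "The climate has always changed; humans have nothing to do with it."
    print(classify_claim(post))
```

Setting temperature to 0 is a common choice for classification-style calls, since it makes the model's label assignments reproducible across runs; the actual decoding settings used in the paper are not stated in this abstract.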
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: misinformation detection and analysis, quantitative analyses of news and/or social media, NLP tools for social analysis
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Approaches to low-compute settings / efficiency, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English
Submission Number: 1651