Comparative Analysis of Machine Learning Models for Climate Change Sentiment Classification: A Multi-Method Evaluation Study
Keywords: Sentiment Analysis, Climate Change, Machine Learning Comparison, Social Media Analysis, Natural Language Processing.
Abstract: This study presents a comprehensive 12-year longitudinal analysis of climate change sentiment classification, comparing four machine learning approaches across 517,468 comments from the RC Groups forum (2011-2024). We evaluate two rule-based methods (VADER, TextBlob) and two deep learning methods (BERT, ClimateBERT) on climate discussions centered on fossil fuel debates. Contrary to expectations, the rule-based methods significantly outperformed the transformer models, with TextBlob achieving 43.36% accuracy versus BERT's 33% and ClimateBERT's 29%. The transformer models exhibited a strong negative bias, misclassifying 60% of positive and neutral comments, a consequence of training on formal corpora rather than informal forum discourse. Temporal analysis revealed an evolving sentiment trajectory, with positive sentiment peaking in 2013 and declining thereafter. Sarcasm detection emerged as the primary challenge, accounting for 35% of misclassifications across all models. These findings demonstrate that climate sentiment analysis benefits more from explicit sentiment indicators than from complex contextual understanding, with important implications for climate communication research and policy assessment applications.
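To illustrate the rule-based approach the abstract compares against transformers, the sketch below shows a minimal lexicon-plus-negation classifier in the spirit of VADER and TextBlob. The tiny lexicon, negation words, and thresholds are illustrative assumptions, not the actual VADER or TextBlob resources.

```python
# Minimal sketch of rule-based sentiment classification, in the spirit
# of VADER/TextBlob: a lexicon lookup with a simple negation rule.
# LEXICON, NEGATORS, and the 0.05 thresholds are assumed for illustration.

LEXICON = {"good": 1.0, "great": 1.5, "clean": 0.5,
           "bad": -1.0, "dirty": -0.5, "disaster": -1.5}
NEGATORS = {"not", "no", "never"}

def classify(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for a comment."""
    score, negate = 0.0, False
    for token in text.lower().split():
        word = token.strip(".,!?")
        if word in NEGATORS:
            negate = True        # flip the polarity of the next word
            continue
        if word in LEXICON:
            score += -LEXICON[word] if negate else LEXICON[word]
        negate = False           # negation only reaches the next token
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"
```

Because the decision depends only on explicit sentiment words, such a classifier is robust to informal forum register but, as the abstract notes, blind to sarcasm that reverses the literal polarity.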
Submission Number: 180