An Explainable Comparative Analysis of Machine Learning and Rule-Based Models in Sentiment Tasks: Implications for Polarization Studies
Keywords: explainable ai; polarization; automated text analysis; nlp; social media analysis
Abstract: This research systematically examines whether post-hoc Explainable AI (XAI) techniques can render transformer-based sentiment models sufficiently interpretable for measuring societal polarization from unstructured text data. Further, we compare explainable transformers with rule-based sentiment lexicons to determine which model class is better suited to societal polarization data. Our results suggest that XAI techniques help explain and interpret model decisions, and that transformers outperform rule-based lexicons.
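To make the abstract's two ingredients concrete, the following is a minimal illustrative sketch, not the paper's code: a toy rule-based lexicon scorer (a stand-in for tools such as VADER) paired with an occlusion-style feature attribution, one of the simplest post-hoc XAI techniques of the kind the abstract refers to. The lexicon, its polarity weights, and the example sentence are all invented for illustration.

```python
# Assumed toy lexicon; real rule-based tools (e.g. VADER) ship far larger ones.
TOY_LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0}

def lexicon_score(tokens):
    """Rule-based sentiment: sum the lexicon polarity of each token."""
    return sum(TOY_LEXICON.get(t.lower(), 0.0) for t in tokens)

def occlusion_attributions(tokens, score_fn):
    """Post-hoc feature attribution: each token's importance is the change
    in the model's score when that token is occluded (removed)."""
    base = score_fn(tokens)
    return {t: base - score_fn(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}

tokens = "the movie was great but the ending was awful".split()
print(lexicon_score(tokens))                        # polarities cancel to 0.0
print(occlusion_attributions(tokens, lexicon_score))
```

The same occlusion procedure applies unchanged to a transformer classifier by swapping `lexicon_score` for the model's predicted sentiment score, which is what makes such attributions useful for comparing the two model families.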
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: counterfactual/contrastive explanations; explanation faithfulness; feature attribution; human-subject application-grounded evaluations
Contribution Types: Model analysis & interpretability, Reproduction study, Publicly available software and/or pre-trained models
Languages Studied: python
Submission Number: 6019