Abstract: Stance detection, a key task in natural language processing, identifies an author's viewpoint toward a target from text. This study examines the evolution of stance detection methods, transitioning from early machine learning approaches to the groundbreaking BERT model, and eventually to modern Large Language Models (LLMs) such as ChatGPT. While ChatGPT's closed-source nature and associated costs present challenges, the open-source model LLaMa-2 offers a promising alternative. We fine-tuned both ChatGPT and LLaMa-2 on two publicly available datasets: SemEval-2016 and P-Stance. Results highlight the efficacy of fine-tuned LLMs in stance detection, with both models surpassing previous benchmarks. LLaMa-2, despite having fewer parameters than ChatGPT, performs competitively, underscoring the efficiency of open-source models. This study emphasizes the potential of LLMs in stance detection and calls for more extensive research in this field. To further contribute to the research community, our code for this study will be made publicly available.
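The abstract does not specify the fine-tuning recipe, so the following is a minimal, hypothetical sketch of how stance detection could be cast as supervised fine-tuning of an open-source LLM such as LLaMa-2. The base checkpoint name, prompt template, toy examples, and hyperparameters are all illustrative assumptions and not the authors' reported setup.

```python
# Hypothetical sketch: LoRA fine-tuning of a LLaMa-2 checkpoint for stance
# detection, framing each (target, tweet, stance) triple as a text prompt.
# All names and hyperparameters below are assumptions for illustration only.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token

# Toy examples mimicking the P-Stance (target, text, label) format.
examples = [
    {"target": "Donald Trump", "text": "He has my vote in November.", "label": "FAVOR"},
    {"target": "Donald Trump", "text": "His policies hurt working families.", "label": "AGAINST"},
]

def to_prompt(ex):
    # Cast stance detection as next-token prediction over a prompt/answer pair.
    prompt = ("Determine the stance of the tweet toward the target.\n"
              f"Target: {ex['target']}\nTweet: {ex['text']}\nStance: {ex['label']}")
    return tokenizer(prompt, truncation=True, max_length=256)

dataset = Dataset.from_list(examples).map(
    to_prompt, remove_columns=["target", "text", "label"])

model = AutoModelForCausalLM.from_pretrained(MODEL)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-stance", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

At inference time, the same prompt would be issued without the gold label and the generated continuation parsed into FAVOR/AGAINST/NONE; the actual procedure used in the paper may differ and is deferred to the authors' released code.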
Paper Type: short
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.