TL;DR: We propose PclGPT, an instruction-tuning-based framework for detecting Patronizing and Condescending Language (PCL) and related implicit emotions.
Abstract: Patronizing and Condescending Language (PCL) is a form of harmful communication directed at vulnerable communities. This type of rhetoric exacerbates conflicts and confrontations within Internet communities and detrimentally impacts relatively marginalized groups. Traditional pre-trained models perform poorly at detecting PCL because of its implicit emotional characteristics, such as hypocrisy and false sympathy. With the rapid development of large language models (LLMs), there is a growing opportunity to leverage their rich emotional and semantic features for sentiment analysis tasks. In this paper, we introduce PclGPT, a comprehensive instruction-tuning framework and a new benchmark LLM designed explicitly for patronizing and condescending language. We construct the PCL-SFT instruction dataset and build PclGPT-EN/CN via supervised fine-tuning to enable cross-lingual emotion detection. Experimental results show that our framework and models surpass advanced pre-trained models on classification tasks, including widely used LLMs such as GPT-3.5 and GPT-4. We further confirm PclGPT's strong ability to detect implicit emotions through fine-grained emotion analysis and experiments on fuzzy (ambiguous) samples. Our model establishes a crucial basis for further research on PCL and other forms of implicit sentiment analysis.
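As a rough illustration of the instruction-tuning setup the abstract describes, the sketch below shows one supervised fine-tuning step on an instruction-formatted PCL example. The prompt template, example record, and base model name ("gpt2" as a stand-in) are assumptions for illustration only; they are not the released PCL-SFT data or PclGPT weights.

```python
# Minimal sketch of instruction-style supervised fine-tuning (SFT) for PCL detection.
# Assumed/hypothetical: the template wording, the example record, and the "gpt2" backbone.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model; the paper fine-tunes larger LLM backbones
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = AdamW(model.parameters(), lr=2e-5)

# One hypothetical PCL-SFT-style record: an instruction, the text to judge, and the label text.
record = {
    "instruction": "Decide whether the following text contains patronizing or condescending language (PCL).",
    "input": "These poor families just need someone educated to explain how to manage money.",
    "output": "Yes, the text is condescending toward the targeted community.",
}

prompt = (
    f"### Instruction:\n{record['instruction']}\n\n"
    f"### Input:\n{record['input']}\n\n### Response:\n"
)
full_text = prompt + record["output"] + tokenizer.eos_token

# Tokenize the prompt and the full sequence; only response tokens contribute to the LM loss.
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(full_text, return_tensors="pt").input_ids
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt tokens when computing the loss

model.train()
outputs = model(input_ids=full_ids, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"SFT step loss: {outputs.loss.item():.4f}")
```

In practice this step would be wrapped in a dataloader over the full instruction dataset, but the core idea is the same: the model is trained to generate the labeled response conditioned on the instruction-formatted input.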
Paper Type: long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English, Chinese