Enhancing Sentiment Analysis for Chinese Texts Using a BERT-Based Model with a Custom Attention Mechanism

Published: 01 Jan 2024, Last Modified: 10 Feb 2025 · WISA 2024 · CC BY-SA 4.0
Abstract: The rise of social media has made automatic emotion recognition from large volumes of text a crucial task in natural language processing (NLP). Traditional sentiment analysis focuses on basic positive or negative polarity, neglecting the broader spectrum of emotional complexity. Our model addresses this by augmenting a pretrained BERT (Bidirectional Encoder Representations from Transformers) language model with a custom attention mechanism. This mechanism dynamically adjusts the weights assigned to the encoder layers to distinguish between semantically similar but emotionally distinct expressions, such as "anger" versus "sadness" or "happiness" versus "surprise." This approach sharpens the model's sensitivity to emotional boundaries, enabling more accurate identification and classification of complex emotions. Experimental results on two six-emotion datasets demonstrate superior precision, recall, and F1-score compared to traditional models.
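The abstract does not spell out how the encoder-layer weights are adjusted, so the sketch below is a minimal, hypothetical illustration of the general idea: learn one scalar score per BERT encoder layer, normalize the scores with a softmax, and mix the layer outputs into a single pooled representation for six-way emotion classification. All names (`layer_weighted_pool`, `layer_scores`) and the choice of first-token pooling are assumptions, not the paper's actual architecture; NumPy stands in for a deep-learning framework to keep the demo self-contained.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def layer_weighted_pool(layer_states, layer_scores):
    """Mix per-layer encoder outputs with softmax-normalized weights.

    layer_states: (num_layers, batch, seq_len, hidden) stacked encoder outputs
    layer_scores: (num_layers,) scalar scores (learnable in a real model;
                  fixed here for the demo)
    Returns a (batch, hidden) pooled representation of the first token.
    """
    w = softmax(layer_scores)                      # (num_layers,)
    mixed = np.tensordot(w, layer_states, axes=1)  # (batch, seq_len, hidden)
    return mixed[:, 0, :]                          # [CLS]-style pooling

# Toy usage: random tensors stand in for the 12 layer outputs of BERT-base.
rng = np.random.default_rng(0)
states = rng.standard_normal((12, 2, 8, 768))
scores = np.zeros(12)  # equal scores -> uniform weights -> mean over layers
pooled = layer_weighted_pool(states, scores)
print(pooled.shape)  # (2, 768)
```

With all scores equal, the softmax weights are uniform, so the pooled output is just the mean of the layers; training would shift the scores toward the layers most informative for fine-grained emotion boundaries.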