Attribute-Based Injection Transformer for Personalized Sentiment Analysis

Published: 01 Jan 2024. Last Modified: 13 Nov 2024. IEEE Trans. Emerg. Top. Comput. Intell., 2024. License: CC BY-SA 4.0.
Abstract: Personal attributes have proven useful for sentiment analysis. However, previous models learn attribute-specific language representations suboptimally because they adopt only context-wise or content-wise injection. This study proposes a Transformer structure that combines both context- and content-wise injections on top of a well-pretrained Transformer encoder. For context-wise injection, self-interactive attention is implemented by incorporating personal attributes into multi-head attention. For content-wise injection, an attribute-based layer normalization aligns the text representation with the personal attributes. Notably, the proposed Transformer layer is a universal layer compatible with the original Google Transformer layer: instead of being trained from scratch, it can be initialized from a well-pretrained checkpoint for downstream tasks. Extensive experiments were conducted on three document-level sentiment analysis benchmarks: IMDB, Yelp-2013, and Yelp-2014. The results show that the proposed method outperforms previous methods for personalized sentiment analysis, demonstrating that combining context- and content-wise injections facilitates learning attribute-specific language representations.
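The abstract names two injection mechanisms but gives no implementation details. The following is a minimal PyTorch sketch of how the two injections could be realized; all module and parameter names (AttributeLayerNorm, AttributeSelfAttention, attr_dim, attr_proj, and so on) are illustrative assumptions, not the paper's actual code.

```python
# Illustrative sketch of the two attribute-injection mechanisms described in
# the abstract. Names and design details are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeLayerNorm(nn.Module):
    """Content-wise injection: layer normalization whose scale and shift
    are predicted from the personal-attribute embedding."""

    def __init__(self, hidden_dim: int, attr_dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.to_gain = nn.Linear(attr_dim, hidden_dim)  # attribute -> scale
        self.to_bias = nn.Linear(attr_dim, hidden_dim)  # attribute -> shift

    def forward(self, x: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim); attr: (batch, attr_dim)
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        x_norm = (x - mean) / torch.sqrt(var + self.eps)
        gain = 1.0 + self.to_gain(attr).unsqueeze(1)    # (batch, 1, hidden)
        bias = self.to_bias(attr).unsqueeze(1)
        return gain * x_norm + bias


class AttributeSelfAttention(nn.Module):
    """Context-wise injection: personal attributes are folded into the query
    projection of multi-head attention, so the attention weights themselves
    become attribute-specific (one reading of 'self-interactive attention')."""

    def __init__(self, hidden_dim: int, attr_dim: int, num_heads: int = 8):
        super().__init__()
        assert hidden_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        self.q_proj = nn.Linear(hidden_dim, hidden_dim)
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)
        self.v_proj = nn.Linear(hidden_dim, hidden_dim)
        self.attr_proj = nn.Linear(attr_dim, hidden_dim)  # attribute -> query bias
        self.out_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        # Add an attribute-dependent term to the queries before splitting heads.
        q = self.q_proj(x) + self.attr_proj(attr).unsqueeze(1)
        k, v = self.k_proj(x), self.v_proj(x)
        # Reshape to (batch, heads, seq_len, head_dim).
        q = q.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        weights = F.softmax(scores, dim=-1)
        out = (weights @ v).transpose(1, 2).reshape(b, t, d)
        return self.out_proj(out)
```

In this sketch, zero-initializing attr_proj, to_gain, and to_bias would make both modules collapse to a standard Transformer sublayer when no attribute signal is present, which is one plausible way to realize the abstract's claim that the layer is compatible with the original Google Transformer layer and can be initialized from a pretrained checkpoint.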