Abstract: Aspect-based sentiment triplet extraction (ASTE), a relatively new and challenging subtask of aspect-based sentiment analysis (ABSA), aims to extract triplets consisting of aspect terms, their associated opinion terms, and sentiment polarities from sentences. Previous studies have used either pipeline models or unified tagging-schema models. These models ignore the syntactic relationships between an aspect and its corresponding opinion words, which leads them to mistakenly focus on syntactically unrelated words. One feasible option is to use a graph convolutional network (GCN) to exploit syntactic information by propagating representations from the opinion words to the aspect. However, such a method treats all syntactic dependencies as the same type and thus may still incorrectly associate unrelated words with the target aspect through iterations of graph-convolutional propagation. Herein, a syntax-aware transformer (SA-Transformer) is proposed that extends the GCN strategy by fully exploiting the dependency types of edges to block inappropriate propagation. The proposed approach can assign different representations and weights even to edges with the same dependency type, according to the dependency types of their adjacent edges. Instead of a GCN layer, an L-layer SA-Transformer is used to encode syntactic information into the word-pair representations to improve performance. Experimental results on four benchmark datasets show that the proposed model outperforms various previous models for ASTE.
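To illustrate the contrast the abstract draws, the following is a minimal sketch (not the paper's implementation) of one untyped graph-convolution step over a dependency parse, next to a dependency-type-aware variant in which each edge type receives its own weight. The sentence, dependency edges, and type weights are hypothetical examples chosen for illustration.

```python
import numpy as np

# Toy sentence "the food was great" with hypothetical dependency edges.
words = ["the", "food", "was", "great"]
np.random.seed(0)
H = np.random.rand(4, 8)             # word representations (4 words, dim 8)
W = np.random.rand(8, 8)             # shared projection matrix

# Plain GCN: every dependency edge contributes equally (untyped adjacency,
# with self-loops on the diagonal).
A = np.array([[1, 1, 0, 0],          # det:   the  -- food
              [1, 1, 0, 1],          # nsubj: food -- great
              [0, 0, 1, 1],          # cop:   was  -- great
              [0, 1, 1, 1]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=1))
H_gcn = np.maximum(D_inv @ A @ H @ W, 0.0)   # ReLU(D^-1 A H W)

# Dependency-type-aware variant: each relation gets its own scalar weight,
# so uninformative relations (e.g. "det") can be down-weighted to block
# propagation from syntactically unrelated words.  These weights are
# illustrative; in the paper they would be learned.
type_weight = {"det": 0.1, "nsubj": 1.0, "cop": 0.5, "self": 1.0}
edges = [(0, 1, "det"), (1, 3, "nsubj"), (2, 3, "cop")]
A_typed = np.eye(4) * type_weight["self"]
for i, j, t in edges:
    A_typed[i, j] = A_typed[j, i] = type_weight[t]
D_inv_t = np.diag(1.0 / A_typed.sum(axis=1))
H_typed = np.maximum(D_inv_t @ A_typed @ H @ W, 0.0)

print(H_gcn.shape, H_typed.shape)    # both (4, 8)
```

Both variants produce one updated representation per word; they differ only in how much each neighbor contributes, which is the gap the SA-Transformer targets by conditioning propagation on edge types.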
External IDs: doi:10.1109/taffc.2023.3291730