Pruning Self-Attention With Local and Syntactic Dependencies for Aspect Sentiment Triplet Extraction

Li Yuan, Jin Wang, Liang-Chih Yu, Yi Cai, Xuejie Zhang

Published: 01 Jan 2025, Last Modified: 15 Mar 2026. IEEE Transactions on Audio, Speech and Language Processing. License: CC BY-SA 4.0.
Abstract: Aspect-based sentiment triplet extraction (ASTE) is a demanding and emerging subtask of aspect-based sentiment analysis (ABSA). The primary objective of ASTE is to extract aspect terms, opinion terms, and their corresponding sentiment polarities from a sentence. Existing methods stack additional graph convolutional networks (GCNs) on pretrained language models (PLMs), propagating representations according to dependency relations. Nevertheless, PLMs are pretrained on general language understanding tasks, so the fully connected self-attention in the Transformer encoder introduces unnecessary relations while disregarding both local n-gram and syntactic dependency information. Moreover, the stacked graph-based layers introduce additional computational load, and their convergence can suffer from incorrect initialization. This study presents a lightweight PLM incorporating local n-gram and syntactic information for ASTE. Instead of stacking extra graph-based layers, we prune the self-attention in the upper Transformer layers of the PLM using local and syntactic dependencies, avoiding inappropriate propagation between unrelated word pairs. A refining strategy is further introduced to improve the final prediction distribution based on the contextual prediction distribution. Experimental results on four benchmarks show that the proposed model outperforms previously proposed methods.
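The pruning idea described above can be illustrated with a minimal sketch: combine a local n-gram window mask with a dependency-tree adjacency mask, and apply the union as an attention mask so that unrelated word pairs receive (near-)zero attention weight. The function names, the window parameter, and the use of a simple edge list are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pruned_attention_mask(n, window, dep_edges):
    """Boolean (n, n) mask: True where attention is kept.

    `window` keeps local n-gram neighbors; `dep_edges` is an
    illustrative list of (head, dependent) index pairs from a
    dependency parse. Both are assumptions for this sketch.
    """
    idx = np.arange(n)
    # Local mask: each token attends to tokens within `window` positions.
    local = np.abs(idx[:, None] - idx[None, :]) <= window
    # Syntactic mask: self-loops plus symmetric dependency edges.
    syn = np.eye(n, dtype=bool)
    for h, d in dep_edges:
        syn[h, d] = syn[d, h] = True
    return local | syn

def masked_softmax(scores, mask):
    """Softmax over the last axis with pruned positions suppressed."""
    scores = np.where(mask, scores, -1e9)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy usage: 5 tokens, window of 1, one long-range dependency edge (0, 3).
mask = pruned_attention_mask(5, 1, [(0, 3)])
weights = masked_softmax(np.random.default_rng(0).standard_normal((5, 5)), mask)
```

In a real model this mask would be applied inside the upper Transformer layers before the softmax over attention logits, so pruning changes connectivity without adding any stacked graph layers.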