Abstract: In the field of Natural Language Processing (NLP), Aspect-Based Sentiment Analysis (ABSA) has gained significant attention in recent years due to its ability to perform fine-grained sentiment analysis. Generative methods tackle various ABSA tasks by autoregressively generating the target sequence of sentiment tuples in a specified format. However, the set of sentiment tuples is intrinsically unordered, so imposing a fixed generation order introduces an order bias between the generated sequence and the original target. To investigate the impact of tuple order on model performance, we conduct a pilot experiment, which reveals that the order of tuples significantly influences the learning outcomes of the Seq2Seq model. We therefore propose a novel tuple-order learning method that arranges tuples from simple to complex, guided by a discrete evaluation method that assesses the difficulty of each individual tuple. Specifically, we incorporate positional information on tuples and employ an effective strategy to expedite the assessment of individual tuples. The method optimizes the learning process while preserving the structural integrity of existing generative models. Extensive experiments show that our approach significantly improves performance on 14 datasets across 5 benchmark tasks. We will release our code at https://github.com/gongzhenhu/TOL.
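To make the core idea concrete, the sketch below (not the authors' implementation) illustrates ordering a sentence's sentiment tuples from simple to complex before linearizing them into the Seq2Seq target sequence; the tuple format and the length-based difficulty heuristic are illustrative assumptions standing in for the paper's discrete difficulty evaluation.

```python
from typing import List, Tuple

# Assumed tuple format: (aspect term, opinion term, sentiment polarity).
SentimentTuple = Tuple[str, str, str]


def tuple_difficulty(t: SentimentTuple) -> float:
    """Stand-in for a per-tuple difficulty score (assumption):
    tuples with longer aspect/opinion spans are treated as harder."""
    aspect, opinion, _ = t
    return len(aspect.split()) + len(opinion.split())


def linearize(tuples: List[SentimentTuple]) -> str:
    """Order tuples from simple to complex, then serialize them into a
    single target sequence in a fixed bracketed format (assumed)."""
    ordered = sorted(tuples, key=tuple_difficulty)
    return " ; ".join(f"({a}, {o}, {s})" for a, o, s in ordered)


if __name__ == "__main__":
    tuples = [
        ("battery life of the laptop", "far too short", "negative"),
        ("screen", "crisp", "positive"),
    ]
    # The simpler tuple ("screen", ...) is emitted first in the target sequence.
    print(linearize(tuples))
```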