Abstract: Pre-trained Language Models (PLMs) have achieved remarkable performance in various Natural Language Processing (NLP) tasks, including Aspect-based Sentiment Analysis (ABSA). Consequently, numerous ABSA models based on PLMs have been proposed, primarily focusing on module design to exploit the inherent connections between aspects and contexts. However, the core factor driving these performance improvements, the PLM's powerful semantic understanding capability, has not been fully considered, raising the question of how to further unlock this potential for downstream tasks. To this end, we introduce a novel training strategy, called CCL, which integrates the strengths of Curriculum Learning (CurL) and Contrastive Learning (ConL) to facilitate the learning of robust feature representations. For the ABSA task, we use aspect similarities to design the CurL strategy, grouping samples with similar aspects into batches. This allows ConL to learn more robust representations from related samples within each batch. The superiority of CCL is demonstrated through extensive experiments on two public ABSA datasets, with ablation studies validating the effectiveness of combining CurL and ConL in enhancing aspect understanding.
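The abstract only outlines the two components at a high level; the sketch below is one plausible way to realize them, not the authors' implementation. It assumes precomputed aspect embeddings, a hypothetical helper `group_by_aspect_similarity` that builds curriculum-style batches of similar aspects, and a generic in-batch supervised contrastive loss in PyTorch.

```python
# Minimal sketch (assumptions, not the paper's code): batch samples whose aspect
# embeddings are similar, then apply a contrastive objective within each batch.
import torch
import torch.nn.functional as F


def group_by_aspect_similarity(aspect_embs, batch_size):
    """Greedily order sample indices so each batch contains similar aspects."""
    normed = F.normalize(aspect_embs, dim=-1)
    sims = normed @ normed.T
    remaining = set(range(len(aspect_embs)))
    batches = []
    while remaining:
        seed = remaining.pop()
        # take the (batch_size - 1) remaining samples most similar to the seed
        ranked = sorted(remaining, key=lambda j: sims[seed, j].item(), reverse=True)
        batch = [seed] + ranked[: batch_size - 1]
        remaining.difference_update(batch)
        batches.append(batch)
    return batches


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """In-batch contrastive loss: samples sharing a label act as positives."""
    features = F.normalize(features, dim=-1)
    logits = features @ features.T / temperature
    n = features.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    logits = logits.masked_fill(eye, float("-inf"))  # drop self-similarity
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss[pos_mask.any(1)].mean()  # average over anchors with positives
```

In this reading, the curriculum component decides *which* samples share a batch (similar aspects together), and the contrastive component then pulls representations of related samples closer within that batch; the exact similarity measure and scheduling used in CCL are not specified in the abstract.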
External IDs: dblp:conf/icassp/JianWZYW025