Keywords: Aspect Sentiment Triplet Extraction, In-context Learning, Large Language Models, Demonstration Selection
Abstract: Although large language models (LLMs) have achieved remarkable success in many NLP tasks, their performance on Aspect Sentiment Triplet Extraction (ASTE) remains inferior to fully supervised methods, even with in-context learning (ICL). We attribute this gap to inconsistent extraction behaviors and insufficient alignment with annotation standards. To address these issues, we propose a multi-view similarity retrieval (MVSR) framework that selects ICL demonstrations by jointly considering semantic and syntactic information. This strategy improves structural alignment between demonstrations and target inputs, leading to more consistent and accurate triplet extraction. Experiments on four ASTE benchmarks show that our method consistently outperforms existing ICL baselines. In the 10-shot setting, it improves F1 by 2.27%, 2.03%, and 2.00% on 14RES, 15RES, and 16RES, respectively, and even surpasses several supervised fine-tuning baselines. These results highlight the importance of structural information in demonstration selection for structured prediction with LLMs.
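The abstract describes selecting demonstrations by jointly scoring semantic and syntactic similarity. A minimal sketch of this idea, under stated assumptions: the vector representations, the interpolation weight `alpha`, and the function names below are illustrative stand-ins, not the paper's actual implementation (which would use real semantic embeddings and syntactic representations of the sentences).

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def select_demonstrations(target, pool, alpha=0.5, k=2):
    """Rank candidate demonstrations by a weighted combination of
    semantic ("sem") and syntactic ("syn") similarity to the target
    input, and return the ids of the top-k candidates."""
    scored = []
    for demo in pool:
        sem = cosine(target["sem"], demo["sem"])  # semantic view
        syn = cosine(target["syn"], demo["syn"])  # syntactic view
        score = alpha * sem + (1 - alpha) * syn   # multi-view fusion
        scored.append((score, demo["id"]))
    scored.sort(reverse=True)
    return [demo_id for _, demo_id in scored[:k]]
```

The selected demonstrations would then be placed in the prompt ahead of the target sentence, so that the LLM sees examples whose structure resembles the input it must extract triplets from.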
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: zero/few-shot extraction, named entity recognition and relation extraction, retrieval-augmented generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 3106