Abstract: Previous Aspect-Based Sentiment Analysis (ABSA) studies have often incorporated syntactic information to connect contextual details with the designated aspect. These methods rely on complex model designs to obtain syntactic structure information and, from it, crucial semantic insights. Considering the strong contextualization abilities of Large Language Models (LLMs), we present the Low-Rank Adaptation plus In-domain Dynamic Exemplar (LoRA-IDE) framework, which aligns task and sentence-context information with the target aspect by leveraging the power of LLMs. Specifically, we employ the LoRA training strategy to let the LLM learn the contextual information of ABSA, and we promote the model's understanding of the connection between sentence context and aspects through carefully designed instructions with IDE. Experimental results demonstrate that our approach not only improves the performance of LLMs on ABSA but also outperforms current state-of-the-art models on two benchmarks by a large margin. The code will be released upon acceptance of this paper.
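The in-domain dynamic exemplar idea from the abstract can be sketched as follows: for each input sentence, retrieve the most similar labeled in-domain examples and prepend them to the ABSA instruction before querying the LLM. The similarity metric (token-level Jaccard), the prompt template, and all function names below are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of In-domain Dynamic Exemplar (IDE) selection for ABSA.
# Assumptions: token-level Jaccard similarity stands in for whatever
# retrieval metric the paper actually uses; the instruction template is
# invented for illustration.

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_exemplars(query: str, pool, k: int = 2):
    """Return the k in-domain (sentence, aspect, label) triples most similar to the query."""
    return sorted(pool, key=lambda ex: jaccard(query, ex[0]), reverse=True)[:k]

def build_prompt(query: str, aspect: str, pool, k: int = 2) -> str:
    """Assemble an instruction with dynamically selected in-domain exemplars."""
    lines = ["Classify the sentiment toward the aspect as positive, negative, or neutral."]
    for sent, asp, lab in select_exemplars(query, pool, k):
        lines.append(f"Sentence: {sent}\nAspect: {asp}\nSentiment: {lab}")
    lines.append(f"Sentence: {query}\nAspect: {aspect}\nSentiment:")
    return "\n\n".join(lines)

# Toy in-domain pool (hypothetical laptop-review data).
pool = [
    ("The battery life is great", "battery life", "positive"),
    ("Service was painfully slow", "service", "negative"),
    ("The screen is fine", "screen", "neutral"),
]
prompt = build_prompt("The battery drains fast", "battery", pool, k=1)
```

The resulting prompt would then be fed to the LoRA-tuned LLM; because the exemplars are chosen per query rather than fixed, the demonstration set adapts to each sentence's vocabulary and domain.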
Paper Type: Short
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: Aspect Based Sentiment Analysis, Large Language Model
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 5459