DRIFT: Dependency Reasoning with Retrieval-Augmented Instruction Fine-Tuning for Aspect–Sentiment Quadruple Prediction
Keywords: Aspect Sentiment Quadruple Prediction, ASQP, Large Language Model, Sentiment Analysis, Instruction-tuning
Abstract: Aspect Sentiment Quadruple Prediction (ASQP), which extracts (aspect, category, opinion, sentiment) quadruples, remains challenging due to the limited use of syntactic structure and the brittleness of exact-match evaluation under span-boundary variation. We present DRIFT, a unified framework that combines syntax-aware instruction tuning with retrieval-augmented inference. DRIFT injects linearized dependency trees into the model input to provide explicit syntactic cues during reasoning, and retrieves semantically similar demonstrations to construct informative contexts at inference time. To mitigate evaluation instability caused by minor boundary disagreements, we further introduce an LLM-based adjudicator that compares predicted and reference spans, yielding judgments that better align with human annotation. Experiments on Rest15 and Rest16 show that DRIFT achieves state-of-the-art F1 scores. Moreover, our ablation analysis indicates that syntactic guidance contributes most under fine-tuning, whereas retrieval augmentation is especially beneficial for in-context learning (ICL). The adjudicator improves evaluation robustness by reducing sensitivity to boundary-level mismatches.
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: Sentiment Analysis, LLM, fine-tuning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2856