Targeted Distillation for Sentiment Analysis

ACL ARR 2025 February Submission4853 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · License: CC BY 4.0
Abstract: This paper presents a compact model that achieves strong sentiment analysis capabilities through targeted distillation from large language models (LLMs). Our methodology decouples the distillation target into two components: sentiment-related knowledge and task alignment. We propose a two-stage distillation framework to transfer these components effectively. The first stage, knowledge-driven distillation (KnowDist), transfers sentiment-related knowledge to enhance fundamental sentiment analysis capabilities. The second stage, in-context learning distillation (ICLDist), transfers task-specific prompt-following abilities to optimize task alignment. For evaluation, we introduce SentiBench, a comprehensive sentiment analysis benchmark comprising three task categories across 12 datasets. Experiments on this benchmark demonstrate that our model effectively balances model size and performance, showing strong competitiveness compared to existing small-scale LLMs.
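The abstract does not specify the training objective, but knowledge-driven distillation of the KnowDist kind is typically built on a soft-label loss: the student is trained to match the teacher's temperature-softened output distribution. The sketch below is a generic, dependency-free illustration of that component, not the authors' implementation; the function names and the temperature value are assumptions for illustration only.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields softer
    # targets that expose more of the teacher's relative preferences.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's soft labels and the student's
    # predictions: the standard objective for transferring knowledge
    # from a large teacher to a compact student (hypothetical sketch).
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
```

The loss is minimized when the student reproduces the teacher's distribution exactly; the second stage (ICLDist) would then further fine-tune the student on prompt-plus-demonstration inputs so that it follows task-specific instructions, but the abstract gives no further detail on that objective.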
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: sentiment analysis
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 4853