Targeted Distillation for Sentiment Analysis

Published: March 5, 2025 | arXiv ID: 2503.03225v1

By: Yice Zhang, Guangyu Xie, Jingjie Lin, and more

Potential Business Impact:

Enables small, low-cost models to understand sentiment in text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper presents a compact model that achieves strong sentiment analysis capabilities through targeted distillation from advanced large language models (LLMs). Our methodology decouples the distillation target into two key components: sentiment-related knowledge and task alignment. To transfer these components, we propose a two-stage distillation framework. The first stage, knowledge-driven distillation (KnowDist), transfers sentiment-related knowledge to enhance fundamental sentiment analysis capabilities. The second stage, in-context learning distillation (ICLDist), transfers task-specific prompt-following abilities to optimize task alignment. For evaluation, we introduce SentiBench, a comprehensive sentiment analysis benchmark comprising 3 task categories across 12 datasets. Experiments on this benchmark demonstrate that our model effectively balances model size and performance, showing strong competitiveness compared to existing small-scale LLMs.
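The abstract does not specify the paper's training objective, but the general idea of distillation can be sketched with the classic logit-matching formulation: the student is trained to match the teacher's temperature-softened output distribution. The sketch below is a minimal, generic illustration using NumPy; the function names, temperature value, and loss form are assumptions for illustration, not the paper's actual KnowDist/ICLDist objectives (which may instead train on teacher-generated responses).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax; higher T flattens the distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) over softened distributions, averaged
    # over the batch. Zero when the two distributions coincide.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean())
```

For example, identical student and teacher logits yield a loss of zero, while any mismatch produces a positive loss, so minimizing this quantity pulls the student's predictions toward the teacher's.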

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Computation and Language