Targeted Distillation for Sentiment Analysis
By: Yice Zhang, Guangyu Xie, Jingjie Lin, and more
Potential Business Impact:
Makes small computers understand feelings in text.
This paper presents a compact model that achieves strong sentiment analysis capabilities through targeted distillation from advanced large language models (LLMs). Our methodology decouples the distillation target into two key components: sentiment-related knowledge and task alignment. To transfer these components, we propose a two-stage distillation framework. The first stage, knowledge-driven distillation (KnowDist), transfers sentiment-related knowledge to enhance fundamental sentiment analysis capabilities. The second stage, in-context learning distillation (ICLDist), transfers task-specific prompt-following abilities to optimize task alignment. For evaluation, we introduce SentiBench, a comprehensive sentiment analysis benchmark comprising 3 task categories across 12 datasets. Experiments on this benchmark demonstrate that our model strikes an effective balance between model size and performance, and is strongly competitive with existing small-scale LLMs.
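The two-stage framework described in the abstract lends itself to a simple training-loop sketch. The code below is an illustrative outline only, not the authors' implementation: the function names, data fields, and loss choices are hypothetical, assuming stage 1 (KnowDist-style) distills the teacher's token distributions via a standard KL-divergence objective and stage 2 (ICLDist-style) fine-tunes the student on teacher-generated in-context demonstrations.

```python
# Hypothetical sketch of a two-stage distillation pipeline (PyTorch).
# Stage 1: transfer sentiment-related knowledge from teacher outputs.
# Stage 2: align the student with task-specific prompts.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Standard soft-label KD: KL(teacher || student) at temperature T."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def stage1_knowledge_distillation(student, teacher, corpus_loader, optimizer):
    # Match teacher token distributions on sentiment-related corpora.
    teacher.eval()
    for batch in corpus_loader:
        with torch.no_grad():
            t_logits = teacher(batch["input_ids"])
        s_logits = student(batch["input_ids"])
        loss = kd_loss(s_logits, t_logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def stage2_icl_distillation(student, demo_loader, optimizer):
    # Fine-tune on teacher-generated in-context demonstrations;
    # prompt tokens are masked out of the loss with ignore_index.
    for batch in demo_loader:
        logits = student(batch["prompt_ids"])  # shape (B, L, V)
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            batch["target_ids"].view(-1),
            ignore_index=-100,
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In this reading, stage 1 shapes what the compact model knows about sentiment, while stage 2 shapes how it follows task prompts, mirroring the paper's decoupling of knowledge transfer and task alignment.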
Similar Papers
Comprehensive and Efficient Distillation for Lightweight Sentiment Analysis Models
Computation and Language
Makes small AI understand feelings like big AI.
Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs
Computation and Language
Filters chat to save computer power.
Transferable text data distillation by trajectory matching
Computation and Language
Makes big computer brains learn with less information.