Semantic-preserved Augmentation with Confidence-weighted Fine-tuning for Aspect Category Sentiment Analysis
By: Yaping Chai, Haoran Xie, Joe S. Qin
Potential Business Impact:
Teaches computers to understand opinions better.
Large language models (LLMs) are an effective approach to addressing data scarcity in low-resource scenarios. Recent research designs hand-crafted prompts to guide LLMs in data augmentation. We introduce a data augmentation strategy for the aspect category sentiment analysis (ACSA) task that preserves the original sentence semantics while adding linguistic diversity, specifically by providing a structured prompt template that directs an LLM to generate predefined content. In addition, we employ a post-processing technique to further ensure semantic consistency between each generated sentence and its original sentence. The augmented data increases the semantic coverage of the training distribution, enabling the model to better understand the relationship between aspect categories and sentiment polarities and enhancing its inference capabilities. Furthermore, we propose a confidence-weighted fine-tuning strategy that encourages the model to produce more confident and accurate sentiment polarity predictions. Our method consistently outperforms strong recent baselines on four benchmark datasets.
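The abstract does not reproduce the structured prompt template or the post-processing check, so the sketch below is a hypothetical illustration of the two ideas: a label-preserving rewrite prompt, and an embedding-similarity filter for semantic consistency. The template wording, the `all-MiniLM-L6-v2` encoder, and the 0.8 threshold are all assumptions, not the paper's actual choices.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical structured prompt: placeholders and wording are
# illustrative, not the paper's actual template.
PROMPT_TEMPLATE = (
    "Rewrite the following sentence with different wording but identical "
    "meaning. Keep every aspect category and its sentiment polarity "
    "unchanged.\n"
    "Sentence: {sentence}\n"
    "Aspect categories and polarities: {labels}\n"
    "Rewritten sentence:"
)

# Assumed encoder choice for the consistency check.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def is_semantically_consistent(original: str, generated: str,
                               threshold: float = 0.8) -> bool:
    """Post-processing filter: keep a generated sentence only if its
    embedding is close enough to the original's (threshold is assumed)."""
    emb = encoder.encode([original, generated], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold
```

Rejected generations would simply be dropped (or regenerated), so only paraphrases that stay close to the source semantics enter the augmented training set.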
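The abstract also leaves the confidence-weighted fine-tuning objective unspecified. A minimal PyTorch sketch under one common reading, scaling each example's cross-entropy by the model's (detached) probability of the gold polarity label so that confident, correct predictions are reinforced, is shown below; the function name and the weighting scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits: torch.Tensor,
                             labels: torch.Tensor) -> torch.Tensor:
    """Assumed weighting: scale per-example cross-entropy by the model's
    (detached) softmax probability of the gold sentiment label."""
    # Per-example cross-entropy, not yet averaged.
    ce = F.cross_entropy(logits, labels, reduction="none")
    # Confidence = probability assigned to the gold label; detached so the
    # weight itself receives no gradient.
    conf = F.softmax(logits, dim=-1).gather(
        1, labels.unsqueeze(1)).squeeze(1).detach()
    return (conf * ce).mean()

# Usage: logits of shape (batch, num_polarities), integer class labels.
logits = torch.randn(4, 3, requires_grad=True)
labels = torch.tensor([0, 2, 1, 2])
loss = confidence_weighted_loss(logits, labels)
loss.backward()
```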
Similar Papers
Emotion-Enhanced Multi-Task Learning with LLMs for Aspect Category Sentiment Analysis
Computation and Language
Teaches computers to understand feelings behind words.
LACA: Improving Cross-lingual Aspect-Based Sentiment Analysis with LLM Data Augmentation
Computation and Language
Lets computers understand feelings in any language.
LLM-based Semantic Augmentation for Harmful Content Detection
Computation and Language
Cleans internet text to fight bad posts.