Advancing Sentiment Analysis: A Novel LSTM Framework with Multi-head Attention
By: Jingyuan Yi, Peiyang Yu, Tianyi Huang, and more
Potential Business Impact:
Helps computers understand if writing is happy or sad.
This work proposes an LSTM-based sentiment classification model with a multi-head attention mechanism and TF-IDF optimization. By integrating TF-IDF feature extraction with multi-head attention, the model significantly improves text sentiment analysis performance. Experimental results on public datasets demonstrate that the proposed method achieves substantial improvements in key metrics such as accuracy, recall, and F1-score compared to baseline models. Specifically, the model achieves an accuracy of 80.28% on the test set, an improvement of about 12% over standard LSTM models. Ablation experiments also confirm the necessity of each module, with multi-head attention contributing the most to the performance gains. This research provides an effective approach to sentiment analysis that can be applied to public opinion monitoring, product recommendation, and related tasks.
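To make the described architecture concrete, below is a minimal sketch of an LSTM classifier with multi-head attention and TF-IDF feature fusion, assuming a PyTorch setup. The layer sizes, vocabulary size, bidirectional LSTM, mean pooling, and the concatenation-based fusion of TF-IDF vectors are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LSTMAttentionClassifier(nn.Module):
    """Hypothetical sketch: LSTM + multi-head attention + TF-IDF fusion."""
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128,
                 num_heads=4, tfidf_dim=300, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Multi-head self-attention applied over the LSTM output sequence.
        self.attention = nn.MultiheadAttention(embed_dim=2 * hidden_dim,
                                               num_heads=num_heads,
                                               batch_first=True)
        # Classifier over the pooled attention output concatenated with a
        # TF-IDF feature vector (assumed fusion strategy).
        self.classifier = nn.Linear(2 * hidden_dim + tfidf_dim, num_classes)

    def forward(self, token_ids, tfidf_features):
        x = self.embedding(token_ids)                # (B, T, E)
        seq, _ = self.lstm(x)                        # (B, T, 2H)
        attended, _ = self.attention(seq, seq, seq)  # (B, T, 2H)
        pooled = attended.mean(dim=1)                # (B, 2H)
        fused = torch.cat([pooled, tfidf_features], dim=1)
        return self.classifier(fused)                # (B, num_classes)

# Example forward pass with random inputs.
model = LSTMAttentionClassifier()
tokens = torch.randint(1, 20000, (8, 64))   # batch of 8 sequences, length 64
tfidf = torch.rand(8, 300)                  # TF-IDF vectors for the same batch
logits = model(tokens, tfidf)               # (8, 2) sentiment logits
```

In this sketch the attention layer reweights the LSTM hidden states before pooling, which is one common way a multi-head attention module can be combined with a recurrent encoder; the paper's exact wiring may differ.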
Similar Papers
TWSSenti: A Novel Hybrid Framework for Topic-Wise Sentiment Analysis on Social Media Using Transformer Models
Computation and Language
Reads feelings from online words better.
Hybrid Extractive Abstractive Summarization for Multilingual Sentiment Analysis
Computation and Language
Understands feelings in many languages faster.
Multi-Modal Opinion Integration for Financial Sentiment Analysis using Cross-Modal Attention
Machine Learning (CS)
Helps predict stock prices by understanding opinions.