Enhancing Granular Sentiment Classification with Chain-of-Thought Prompting in Large Language Models
By: Vihaan Miriyala, Smrithi Bukkapatnam, Lavanya Prahallad
Potential Business Impact:
Helps computers understand feelings in app reviews better.
We explore the use of Chain-of-Thought (CoT) prompting with large language models (LLMs) to improve the accuracy of granular sentiment categorization in app store reviews. Traditional numeric and polarity-based ratings often fail to capture the nuanced sentiment embedded in user feedback. We evaluate the effectiveness of CoT prompting versus simple prompting on 2,000 Amazon app reviews by comparing each method's predictions to human judgments. CoT prompting improved classification accuracy from 84% to 93%, highlighting the benefit of explicit reasoning in enhancing sentiment analysis performance.
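The contrast between the two prompting styles can be made concrete with a short script. The following is a minimal sketch assuming the OpenAI chat completions API; the model name, prompt wording, and five-point label set are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of simple vs. Chain-of-Thought (CoT) prompting for
# granular sentiment classification. The prompts, label set, and model
# below are illustrative assumptions, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["very negative", "negative", "neutral", "positive", "very positive"]

# Simple prompting: ask for the label directly, no reasoning.
SIMPLE_PROMPT = (
    "Classify the sentiment of this app review as one of: "
    f"{', '.join(LABELS)}. Reply with the label only.\n\n"
    "Review: {review}"
)

# CoT prompting: ask the model to reason step by step before answering.
COT_PROMPT = (
    "Classify the sentiment of this app review as one of: "
    f"{', '.join(LABELS)}.\n"
    "First, reason step by step: identify the aspects the reviewer "
    "mentions, the emotion attached to each, and how they weigh against "
    "each other. Then give your final answer on the last line in the "
    "form 'Label: <label>'.\n\n"
    "Review: {review}"
)

def classify(review: str, template: str, model: str = "gpt-4o-mini") -> str:
    """Run one review through the given prompt template and return the label."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": template.format(review=review)}],
        temperature=0,  # keep outputs stable for evaluation
    )
    text = response.choices[0].message.content.strip()
    # CoT replies end with 'Label: <label>'; simple replies are the label
    # itself, in which case rsplit leaves the whole string untouched.
    return text.rsplit("Label:", 1)[-1].strip().lower()

if __name__ == "__main__":
    review = "Great features, but it crashes every time I open the camera."
    print("simple:", classify(review, SIMPLE_PROMPT))
    print("CoT:   ", classify(review, COT_PROMPT))
```

Mixed-sentiment reviews like the one above are where the two methods tend to diverge: the simple prompt must commit to a label in one step, while the CoT prompt first surfaces the competing aspects ("great features" vs. "crashes") before deciding.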
Similar Papers
Layered Chain-of-Thought Prompting for Multi-Agent LLM Systems: A Comprehensive Approach to Explainable Large Language Models
Computation and Language
Makes AI explain its thinking more clearly and correctly.
Prompting Science Report 2: The Decreasing Value of Chain of Thought in Prompting
Computation and Language
Helps AI think better, but costs more.
Chain-of-Conceptual-Thought: Eliciting the Agent to Deeply Think within the Response
Computation and Language
Helps AI understand feelings and give better advice.