Beyond Stars: Bridging the Gap Between Ratings and Review Sentiment with LLM
By: Najla Zuhir, Amna Mohammad Salim, Parvathy Premkumar, and more
Potential Business Impact:
Helps app developers understand what users *really* mean.
We present an advanced approach to mobile app review analysis aimed at addressing limitations inherent in traditional star-rating systems. Star ratings, although intuitive and popular among users, often fail to capture the nuanced feedback present in detailed review texts. Traditional NLP techniques -- such as lexicon-based methods and classical machine learning classifiers -- struggle to interpret contextual nuances, domain-specific terminology, and subtle linguistic features like sarcasm. To overcome these limitations, we propose a modular framework leveraging large language models (LLMs) enhanced by structured prompting techniques. Our method quantifies discrepancies between numerical ratings and textual sentiment, extracts detailed, feature-level insights, and supports interactive exploration of reviews through retrieval-augmented conversational question answering (RAG-QA). Comprehensive experiments conducted on three diverse datasets (AWARE, Google Play, and Spotify) demonstrate that our LLM-driven approach significantly surpasses baseline methods, yielding improved accuracy, robustness, and actionable insights in challenging and context-rich review scenarios.
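The abstract's core idea of quantifying the discrepancy between a star rating and the sentiment an LLM reads from the review text can be sketched in a few lines. The paper does not publish code here; the snippet below is a minimal, hypothetical sketch in which `llm_sentiment_1_to_5`, `rating_sentiment_gap`, and the flagging threshold are illustrative names and values, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): quantify the gap between a
# review's star rating and the sentiment an LLM infers from its text.

from dataclasses import dataclass

@dataclass
class Review:
    stars: int   # user-assigned rating, 1-5
    text: str    # free-form review body

def llm_sentiment_1_to_5(text: str) -> float:
    """Placeholder for an LLM call that scores review sentiment on a 1-5
    scale (e.g., via a structured prompt asking for a single number)."""
    # Toy heuristic so the sketch runs end to end; swap in a real LLM call.
    negative_cues = ("crash", "bug", "refund", "uninstall", "terrible")
    return 1.5 if any(cue in text.lower() for cue in negative_cues) else 4.5

def rating_sentiment_gap(review: Review) -> float:
    """Positive gap: stars are more generous than the text; negative: harsher."""
    return review.stars - llm_sentiment_1_to_5(review.text)

def flag_mismatches(reviews, threshold=1.5):
    """Return reviews whose rating and textual sentiment disagree strongly."""
    return [(r, rating_sentiment_gap(r)) for r in reviews
            if abs(rating_sentiment_gap(r)) >= threshold]

if __name__ == "__main__":
    sample = [
        Review(5, "Love the playlists, but it crashes every time I open a podcast."),
        Review(4, "Great app, smooth and fast."),
    ]
    for review, gap in flag_mismatches(sample):
        print(f"stars={review.stars}, gap={gap:+.1f}: {review.text}")
```

Flagged reviews like the first example (five stars paired with a complaint about crashes) are exactly the context-rich cases where, per the abstract, the framework's feature-level extraction and RAG-QA components would be applied.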
Similar Papers
Evaluating LLM-Based Mobile App Recommendations: An Empirical Study
Information Retrieval
Shows how smart computer programs pick apps.
A Review on Large Language Models for Visual Analytics
Human-Computer Interaction
Lets computers understand pictures and words together.
From Reviews to Actionable Insights: An LLM-Based Approach for Attribute and Feature Extraction
Machine Learning (Stat)
Helps businesses understand customer feedback faster.