Prompt Sentiment: The Catalyst for LLM Change
By: Vishal Gandhi, Sagar Gandhi
Potential Business Impact:
Makes AI answers better by checking how you ask.
The rise of large language models (LLMs) has revolutionized natural language processing (NLP), yet the influence of prompt sentiment, a latent affective characteristic of input text, remains underexplored. This study systematically examines how sentiment variations in prompts affect LLM-generated outputs in terms of coherence, factuality, and bias. Leveraging both lexicon-based and transformer-based sentiment analysis methods, we categorize prompts and evaluate responses from five leading LLMs: Claude, DeepSeek, GPT-4, Gemini, and LLaMA. Our analysis spans six AI-driven applications: content generation, conversational AI, legal and financial analysis, healthcare AI, creative writing, and technical documentation. By transforming prompts, we assess their impact on output quality. Our findings reveal that prompt sentiment significantly influences model responses, with negative prompts often reducing factual accuracy and amplifying bias, while positive prompts tend to increase verbosity and sentiment propagation. These results highlight the importance of sentiment-aware prompt engineering for ensuring fair and reliable AI-generated content.
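The lexicon-based side of the categorization step can be sketched as follows. This is a minimal illustration only, using a toy valence lexicon and a simple threshold; the study's actual lexicon, tokenization, and transformer-based classifier are not shown here.

```python
# Minimal sketch of lexicon-based prompt sentiment categorization.
# TOY_LEXICON is illustrative only, not the lexicon used in the study.
TOY_LEXICON = {
    "great": 1.0, "helpful": 0.8, "please": 0.3,
    "wrong": -0.8, "terrible": -1.0, "useless": -0.9,
}

def categorize_prompt(prompt: str, threshold: float = 0.2) -> str:
    """Sum per-word valences over the prompt, then bucket the total
    into positive / negative / neutral using a symmetric threshold."""
    tokens = (t.strip(".,!?") for t in prompt.lower().split())
    score = sum(TOY_LEXICON.get(t, 0.0) for t in tokens)
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

print(categorize_prompt("Please write a great, helpful summary."))  # positive
print(categorize_prompt("This is wrong and useless, fix it."))      # negative
print(categorize_prompt("Summarize the quarterly report."))         # neutral
```

In practice a real lexicon method (e.g. VADER) also handles negation and intensifiers, and a transformer classifier replaces the word-sum with a learned score, but the positive/negative/neutral bucketing shown here is the common output shape.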
Similar Papers
Green Prompting
Computation and Language
Makes AI use less electricity by changing its questions.
Does Tone Change the Answer? Evaluating Prompt Politeness Effects on Modern LLMs: GPT, Gemini, LLaMA
Computation and Language
Makes AI understand questions better by being polite.
Use Me Wisely: AI-Driven Assessment for LLM Prompting Skills Development
Computers and Society
Teaches computers to grade student prompts automatically.