Enhancing Sentiment Classification and Irony Detection in Large Language Models through Advanced Prompt Engineering Techniques
By: Marvin Schmitt, Anne Schwerk, Sebastian Lempert
This study investigates the use of prompt engineering to enhance large language models (LLMs), specifically GPT-4o-mini and gemini-1.5-flash, on sentiment analysis tasks. It evaluates advanced prompting techniques, including few-shot learning, chain-of-thought prompting, and self-consistency, against a baseline prompt. Key tasks include sentiment classification, aspect-based sentiment analysis, and the detection of subtle nuances such as irony. The paper details the theoretical background, datasets, and methods used, and measures performance via accuracy, precision, recall, and F1 score. Findings reveal that advanced prompting significantly improves sentiment analysis: few-shot prompting performs best with GPT-4o-mini, while chain-of-thought prompting boosts irony detection in gemini-1.5-flash by up to 46%. This divergence shows that no single strategy dominates; effective prompt design must be tailored to both the model's architecture and the semantic complexity of the task.
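To make the three strategies concrete, here is a minimal sketch of how baseline, few-shot, and chain-of-thought prompts (plus a self-consistency vote over sampled reasoning paths) might be issued against GPT-4o-mini via the OpenAI Python client. The prompt templates, example texts, and sampling parameters are illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch of the prompting strategies the paper compares, using the OpenAI
# Python client (pip install openai). Templates and settings are assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Baseline: a bare instruction with no examples or reasoning cues.
BASELINE = (
    "Classify the sentiment of the following text as positive, negative, "
    "or neutral.\nText: {text}\nSentiment:"
)

# Few-shot: the same task preceded by labeled demonstrations.
FEW_SHOT = (
    "Classify the sentiment of the text as positive, negative, or neutral.\n"
    "Text: The battery lasts forever and the screen is gorgeous.\nSentiment: positive\n"
    "Text: It broke after two days.\nSentiment: negative\n"
    "Text: {text}\nSentiment:"
)

# Chain-of-thought: ask the model to reason before answering, which the
# paper reports helps irony detection in gemini-1.5-flash.
CHAIN_OF_THOUGHT = (
    "Does the following text contain irony? Think step by step: state the "
    "literal meaning, then the likely intended meaning, then answer with "
    "a single word, yes or no.\nText: {text}\nReasoning:"
)

def classify(prompt: str, text: str, temperature: float = 0.0) -> str:
    """Send one formatted prompt and return the raw completion."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt.format(text=text)}],
        temperature=temperature,
    )
    return response.choices[0].message.content.strip()

def self_consistency(prompt: str, text: str, n: int = 5) -> str:
    """Sample n reasoning paths at temperature > 0 and majority-vote the answers."""
    # Voting on the final word of each completion is a simplification; a real
    # pipeline would parse the label more robustly.
    answers = [
        classify(prompt, text, temperature=0.7).split()[-1].lower().strip(".")
        for _ in range(n)
    ]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    sample = "Great, another Monday. Just what I needed."
    print(classify(FEW_SHOT, sample))
    print(self_consistency(CHAIN_OF_THOUGHT, sample))
```

The same templates could be sent to gemini-1.5-flash through Google's client to reproduce the cross-model comparison; only the transport layer changes, not the prompt design.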
Similar Papers
Prompt engineering does not universally improve Large Language Model performance across clinical decision-making tasks
Computation and Language
Helps doctors make better diagnoses and treatment decisions.
Prompt Engineering and the Effectiveness of Large Language Models in Enhancing Human Productivity
Human-Computer Interaction
Clear instructions make AI work better.
Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs
Computers and Society
Makes AI doctors more honest about what they know.