Prompt-Based Clarity Evaluation and Topic Detection in Political Question Answering
By: Lavanya Prahallad, Sai Utkarsh Choudarypally, Pragna Prahallad, and more
Automatic evaluation of large language model (LLM) responses requires not only factual correctness but also clarity, particularly in political question-answering. While recent datasets provide human annotations for clarity and evasion, the impact of prompt design on automatic clarity evaluation remains underexplored. In this paper, we study prompt-based clarity evaluation using the CLARITY dataset from the SemEval 2026 shared task. We compare a GPT-3.5 baseline provided with the dataset against GPT-5.2 evaluated under three prompting strategies: simple prompting, chain-of-thought prompting, and chain-of-thought with few-shot examples. Model predictions are evaluated against human annotations using accuracy and class-wise metrics for clarity and evasion, along with hierarchical exact match. Results show that GPT-5.2 consistently outperforms the GPT-3.5 baseline on clarity prediction, with accuracy improving from 56 percent to 63 percent under chain-of-thought with few-shot prompting. Chain-of-thought prompting yields the highest evasion accuracy at 34 percent, though improvements are less stable across fine-grained evasion categories. We further evaluate topic identification and find that reasoning-based prompting improves accuracy from 60 percent to 74 percent relative to human annotations. Overall, our findings indicate that prompt design reliably improves high-level clarity evaluation, while fine-grained evasion and topic detection remain challenging despite structured reasoning prompts.
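To make the setup concrete, the sketch below illustrates how the three prompting strategies and the hierarchical exact-match metric described above might be wired together. The prompt wording, the label sets, and the query_model stub are illustrative assumptions for this sketch, not the authors' actual implementation or the CLARITY dataset's label inventory.

```python
# Hypothetical sketch of the prompt-based clarity evaluation pipeline.
# Label names, prompt templates, and query_model() are assumptions, not the paper's code.

CLARITY_LABELS = ["clear", "ambiguous"]            # assumed coarse clarity labels
EVASION_LABELS = ["direct", "partial", "dodging"]  # assumed fine-grained evasion labels

FEW_SHOT_EXAMPLES = (
    "Q: Will you raise taxes?\n"
    "A: We will review the budget after the election.\n"
    "Clarity: ambiguous | Evasion: dodging\n\n"
)

def build_prompt(question: str, answer: str, strategy: str) -> str:
    """Build one of the three prompting strategies compared in the paper."""
    task = (f"Question: {question}\nAnswer: {answer}\n"
            "Label the answer's clarity and evasion type.")
    if strategy == "simple":
        return task
    if strategy == "cot":
        return task + "\nThink step by step before giving the labels."
    if strategy == "cot_few_shot":
        return FEW_SHOT_EXAMPLES + task + "\nThink step by step before giving the labels."
    raise ValueError(f"unknown strategy: {strategy}")

def query_model(prompt: str) -> dict:
    """Placeholder for the LLM call; replace with an actual API client."""
    return {"clarity": "ambiguous", "evasion": "dodging"}

def hierarchical_exact_match(preds: list[dict], golds: list[dict]) -> float:
    """Fraction of examples where both the coarse clarity label and the
    fine-grained evasion label agree with the human annotation."""
    hits = sum(p["clarity"] == g["clarity"] and p["evasion"] == g["evasion"]
               for p, g in zip(preds, golds))
    return hits / len(golds)

if __name__ == "__main__":
    prompt = build_prompt("Will you raise taxes?",
                          "We will review the budget after the election.",
                          "cot_few_shot")
    pred = query_model(prompt)
    gold = {"clarity": "ambiguous", "evasion": "dodging"}
    print(hierarchical_exact_match([pred], [gold]))
```

Under this reading, per-dimension accuracy and class-wise metrics would be computed from the same predicted dictionaries, while hierarchical exact match credits an example only when the coarse and fine-grained labels are both correct.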