Flaw or Artifact? Rethinking Prompt Sensitivity in Evaluating LLMs
By: Andong Hua, Kenan Tang, Chenhe Gu, and more
Potential Business Impact:
AI models answer rephrased questions more reliably than standard tests suggest.
Prompt sensitivity, the phenomenon where paraphrasing a prompt (rewording it without changing its meaning) leads to significant changes in large language model (LLM) performance, has been widely accepted as a core limitation of LLMs. In this work, we revisit this issue and ask: is the widely reported high prompt sensitivity truly an inherent weakness of LLMs, or is it largely an artifact of evaluation processes? To answer this question, we systematically evaluate 7 LLMs (including the GPT and Gemini families) across 6 benchmarks, covering both multiple-choice and open-ended tasks, using 12 diverse prompt templates. We find that much of the reported prompt sensitivity stems from heuristic evaluation methods, including log-likelihood scoring and rigid answer matching, which often overlook semantically correct responses expressed through alternative phrasings, such as synonyms or paraphrases. When we adopt LLM-as-a-Judge evaluation, we observe a substantial reduction in performance variance and consistently higher correlation in model rankings across prompts. Our findings suggest that modern LLMs are more robust to prompt templates than previously believed, and that prompt sensitivity may be more an artifact of evaluation than a flaw in the models.
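The distinction the abstract draws between heuristic scoring and judge-based scoring is easy to illustrate. The sketch below is ours, not the authors' code: `call_llm` is a hypothetical stand-in for whatever API serves the judge model, and the stub at the end exists only so the example runs end to end. It contrasts rigid exact-match scoring (plus a toy version of log-likelihood option scoring) with an LLM-as-a-Judge check that accepts paraphrased answers.

```python
from typing import Callable


def exact_match(prediction: str, reference: str) -> bool:
    """Rigid answer matching: only a verbatim (case-folded) match scores 1."""
    return prediction.strip().lower() == reference.strip().lower()


def loglik_choice(option_logprobs: dict) -> str:
    """Toy log-likelihood scoring for multiple choice: grade the option the
    model assigns the highest log-probability, regardless of what the model
    would actually generate under a given prompt template."""
    return max(option_logprobs, key=option_logprobs.get)


def llm_judge(call_llm: Callable[[str], str], question: str,
              prediction: str, reference: str) -> bool:
    """LLM-as-a-Judge: ask a judge model whether the prediction is
    semantically equivalent to the reference answer."""
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model answer: {prediction}\n"
        "Does the model answer convey the same meaning as the reference? "
        "Reply with exactly YES or NO."
    )
    return call_llm(prompt).strip().upper().startswith("YES")


# A paraphrased-but-correct open-ended answer is scored 0 by exact match...
question = "What is the largest planet in the Solar System?"
reference = "Jupiter"
prediction = "It's Jupiter, the gas giant."
print(exact_match(prediction, reference))  # False: penalized despite being correct

# ...but accepted by a judge. A real judge would be an API call; here a
# canned stub stands in so the sketch runs without network access.
stub_judge = lambda prompt: "YES"
print(llm_judge(stub_judge, question, prediction, reference))  # True

# Multiple-choice heuristic: the graded answer is the highest-scoring option.
print(loglik_choice({"A": -1.2, "B": -0.4, "C": -3.0}))  # "B"
```

The sketch shows the mechanism the paper points to: a semantically correct but differently phrased answer fails the heuristic check while passing the judge, so part of the measured "sensitivity" is the scorer, not the model.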
Similar Papers
Mapping from Meaning: Addressing the Miscalibration of Prompt-Sensitive Language Models
Computation and Language
Makes AI understand questions more reliably.
Promptception: How Sensitive Are Large Multimodal Models to Prompts?
Computer Vision and Pattern Recognition
Makes AI answer questions more fairly.
A Human-AI Comparative Analysis of Prompt Sensitivity in LLM-Based Relevance Judgment
Information Retrieval
Makes computers better at judging search results.