Flaw or Artifact? Rethinking Prompt Sensitivity in Evaluating LLMs

Published: September 1, 2025 | arXiv ID: 2509.01790v1

By: Andong Hua, Kenan Tang, Chenhe Gu, and more

Potential Business Impact:

LLMs answer reliably even when prompts are reworded, reducing the need for brittle prompt engineering.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Prompt sensitivity, the phenomenon where paraphrasing (i.e., rewording a prompt while preserving its meaning) leads to significant changes in large language model (LLM) performance, has been widely accepted as a core limitation of LLMs. In this work, we revisit this issue and ask: Is the widely reported high prompt sensitivity truly an inherent weakness of LLMs, or is it largely an artifact of evaluation processes? To answer this question, we systematically evaluate 7 LLMs (e.g., the GPT and Gemini families) across 6 benchmarks, including both multiple-choice and open-ended tasks, using 12 diverse prompt templates. We find that much of the prompt sensitivity stems from heuristic evaluation methods, including log-likelihood scoring and rigid answer matching, which often overlook semantically correct responses expressed through alternative phrasings, such as synonyms or paraphrases. When we adopt LLM-as-a-Judge evaluations, we observe a substantial reduction in performance variance and a consistently higher correlation in model rankings across prompts. Our findings suggest that modern LLMs are more robust to prompt templates than previously believed, and that prompt sensitivity may be more an artifact of evaluation than a flaw in the models.
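To make the abstract's central contrast concrete, here is a minimal Python sketch (not the authors' code) comparing rigid exact-match scoring against a more tolerant, judge-style check. The toy dataset and the containment-based judge_match, which stands in for a real LLM-as-a-Judge call, are illustrative assumptions.

```python
# A minimal sketch contrasting rigid answer matching with a tolerant,
# judge-style check. Dataset, responses, and the judge heuristic are
# illustrative assumptions, not the paper's actual pipeline.

import statistics

# Gold answer plus model responses elicited under three prompt templates.
gold = "Paris"
responses_by_template = {
    "template_1": ["Paris", "paris", "The capital is Paris."],
    "template_2": ["It's Paris.", "Paris, France", "PARIS"],
    "template_3": ["Paris", "The answer: Paris", "France's capital, Paris"],
}

def rigid_match(response: str, gold: str) -> bool:
    # Exact string equality: the kind of heuristic matching the paper
    # identifies as a source of apparent prompt sensitivity.
    return response == gold

def judge_match(response: str, gold: str) -> bool:
    # Stand-in for an LLM-as-a-Judge call: a normalized containment
    # check that accepts semantically equivalent phrasings.
    return gold.lower() in response.lower()

for scorer in (rigid_match, judge_match):
    accs = [
        sum(scorer(r, gold) for r in rs) / len(rs)
        for rs in responses_by_template.values()
    ]
    print(f"{scorer.__name__}: per-template accuracy={accs}, "
          f"variance={statistics.pvariance(accs):.4f}")
```

Under rigid matching, per-template accuracy in this toy example swings between 0 and 1/3 purely because of surface phrasing; under the tolerant check, every template scores 1.0 and the variance drops to zero, mirroring the variance reduction the authors report when switching to LLM-as-a-Judge evaluation.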

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science:
Computation and Language