Mapping from Meaning: Addressing the Miscalibration of Prompt-Sensitive Language Models
By: Kyle Cox, Jiawei Xu, Yikun Han, and more
Potential Business Impact:
Makes AI understand questions more reliably.
An interesting behavior in large language models (LLMs) is prompt sensitivity. When provided with different but semantically equivalent versions of the same prompt, models may produce very different distributions of answers. This suggests that the uncertainty reflected in a model's output distribution for one prompt may not reflect the model's uncertainty about the meaning of the prompt. We model prompt sensitivity as a type of generalization error, and show that sampling across the semantic "concept space" with paraphrasing perturbations improves uncertainty calibration without compromising accuracy. Additionally, we introduce a new metric for uncertainty decomposition in black-box LLMs that improves upon entropy-based decomposition by modeling semantic continuities in natural language generation. We show that this decomposition metric can be used to quantify how much LLM uncertainty is attributed to prompt sensitivity. Our work introduces a new way to improve uncertainty calibration in prompt-sensitive language models, and provides evidence that some LLMs fail to exhibit consistent general reasoning about the meanings of their inputs.
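To make the idea concrete, here is a minimal sketch of the paraphrase-sampling setup the abstract describes, paired with the standard entropy-based decomposition that the paper says it improves upon. The paper's own metric (which models semantic continuities) is not specified in the abstract, so this only illustrates the baseline: the `paraphrases` list and the `sample_answer` callable are hypothetical placeholders for a paraphrase generator and one call to a black-box LLM.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a discrete distribution given as a Counter."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c > 0)

def decompose_uncertainty(paraphrases, sample_answer, n_samples=20):
    """Sample answers for each paraphrase of a prompt and split total
    uncertainty into a within-paraphrase part and a between-paraphrase
    (prompt-sensitivity) part.

    `sample_answer(prompt) -> str` stands in for one black-box LLM call;
    answers are assumed to be short strings that can be compared exactly
    (e.g. multiple-choice labels).
    """
    per_prompt = []        # answer distribution for each paraphrase
    pooled = Counter()     # answer distribution pooled over all paraphrases
    for p in paraphrases:
        counts = Counter(sample_answer(p) for _ in range(n_samples))
        per_prompt.append(counts)
        pooled.update(counts)

    total = entropy(pooled)                                           # uncertainty of the pooled distribution
    within = sum(entropy(c) for c in per_prompt) / len(per_prompt)    # average per-paraphrase uncertainty
    prompt_sensitivity = total - within                               # mutual information between paraphrase and answer
    return {"total": total, "within": within, "prompt_sensitivity": prompt_sensitivity}
```

In this baseline view, the pooled distribution is what "sampling across the concept space" calibrates against, and a large `prompt_sensitivity` term indicates that much of the model's uncertainty comes from how the question is phrased rather than from the question's meaning.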
Similar Papers
Flaw or Artifact? Rethinking Prompt Sensitivity in Evaluating LLMs
Computation and Language
Computers understand words better, even if rephrased.
Promptception: How Sensitive Are Large Multimodal Models to Prompts?
Computer Vision and Pattern Recognition
Makes AI answer questions more fairly.
Prompt Stability in Code LLMs: Measuring Sensitivity across Emotion- and Personality-Driven Variations
Software Engineering
Makes AI write code the same, no matter how you ask.