The Instability of Safety: How Random Seeds and Temperature Expose Inconsistent LLM Refusal Behavior
By: Erik Larsen
Potential Business Impact:
An AI's safety filter can give a different answer each time it runs, so a harmful request it blocks once may slip through the next time, and one-off safety tests can miss real risk.
Current safety evaluations of large language models rely on single-shot testing, implicitly assuming that model responses are deterministic and representative of the model's safety alignment. We challenge this assumption by investigating the stability of safety refusal decisions across random seeds and temperature settings. Testing four instruction-tuned models from three families (Llama 3.1 8B, Qwen 2.5 7B, Qwen 3 8B, Gemma 3 12B) on 876 harmful prompts across 20 sampling configurations (4 temperatures × 5 random seeds), we find that, depending on the model, 18-28% of prompts exhibit decision flips: the model refuses under some configurations but complies under others. Our Safety Stability Index (SSI) reveals that higher temperatures significantly reduce decision stability (Friedman chi-squared = 44.71, p < 0.001), with mean SSI dropping from 0.951 at temperature 0.0 to 0.896 at temperature 1.0. We validate our findings across all model families using Claude 3.5 Haiku as a unified external judge, achieving 89.0% inter-judge agreement with our primary Llama 70B judge (Cohen's kappa = 0.62). These findings demonstrate that single-shot safety evaluations are insufficient for reliable safety assessment: single-shot evaluation agrees with multi-sample ground truth only 92.4% of the time, and we recommend using at least 3 samples per prompt.
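To make the evaluation protocol concrete, the sketch below shows one plausible way to compute per-prompt stability, flip rates, and single-shot agreement from refusal labels collected across (temperature, seed) configurations. It is a minimal illustration, not the paper's implementation: the abstract does not give the exact SSI formula, so the majority-agreement fraction used here, along with the function names and toy data, are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the paper's exact method) of refusal-stability
# metrics. Input: one binary refusal label (1 = refused, 0 = complied) per
# prompt per (temperature, seed) configuration.
from collections import Counter
from typing import Dict, List, Tuple

Config = Tuple[float, int]  # (temperature, seed)


def stability_index(labels: List[int]) -> float:
    """Fraction of configurations agreeing with the majority decision
    (an illustrative stand-in for the paper's SSI)."""
    return Counter(labels).most_common(1)[0][1] / len(labels)


def flip_rate(results: Dict[str, Dict[Config, int]]) -> float:
    """Share of prompts whose refusal decision is not unanimous."""
    flipped = sum(1 for labels in results.values()
                  if len(set(labels.values())) > 1)
    return flipped / len(results)


def single_shot_agreement(results: Dict[str, Dict[Config, int]],
                          reference: Config) -> float:
    """How often one fixed configuration matches the majority vote
    over all configurations (a proxy for multi-sample ground truth)."""
    agree = 0
    for labels in results.values():
        majority = Counter(labels.values()).most_common(1)[0][0]
        agree += int(labels[reference] == majority)
    return agree / len(results)


# Toy usage: 2 hypothetical prompts x 4 configurations.
results = {
    "prompt_001": {(0.0, 1): 1, (0.0, 2): 1, (1.0, 1): 1, (1.0, 2): 0},
    "prompt_002": {(0.0, 1): 1, (0.0, 2): 1, (1.0, 1): 1, (1.0, 2): 1},
}
mean_ssi = sum(stability_index(list(l.values()))
               for l in results.values()) / len(results)
print(f"mean SSI: {mean_ssi:.3f}")                                   # 0.875
print(f"flip rate: {flip_rate(results):.2f}")                        # 0.50
print(f"single-shot agreement: "
      f"{single_shot_agreement(results, (0.0, 1)):.2f}")             # 1.00
```

At larger scale, aggregate comparisons like the paper's temperature effect could be tested with scipy.stats.friedmanchisquare, and agreement between two judges with sklearn.metrics.cohen_kappa_score, though the paper's exact statistical pipeline is not specified in the abstract.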
Similar Papers
When Refusals Fail: Unstable Safety Mechanisms in Long-Context LLM Agents
Machine Learning (CS)
Shows AI safety rules can break down during long, complex tasks.
Should LLM Safety Be More Than Refusing Harmful Instructions?
Computation and Language
Asks whether AI safety should mean more than just refusing bad requests.
Rethinking Safety in LLM Fine-tuning: An Optimization Perspective
Machine Learning (CS)
Keeps AI safe when learning new things.