Large Language Models Are Bad Dice Players: LLMs Struggle to Generate Random Numbers from Statistical Distributions
By: Minda Zhao, Yilun Du, Mengyu Wang
Potential Business Impact:
AI language models can't reliably make random choices.
As large language models (LLMs) transition from chat interfaces to integral components of stochastic pipelines across domains like educational assessment and synthetic data construction, the ability to faithfully sample from specified probability distributions has become a functional requirement rather than a theoretical curiosity. We present the first large-scale, statistically powered audit of native probabilistic sampling in frontier LLMs, benchmarking 11 models across 15 distributions. To disentangle failure modes, we employ a dual-protocol design: Batch Generation, where a model produces N=1000 samples within one response, and Independent Requests, comprising N=1000 stateless calls. We observe a sharp protocol asymmetry: batch generation achieves only modest statistical validity, with a 13% median pass rate, while independent requests collapse almost entirely, with 10 of 11 models passing none of the distributions. Beyond this asymmetry, we reveal that sampling fidelity degrades monotonically with distributional complexity and worsens as the requested sampling horizon N increases. Finally, we demonstrate the propagation of these failures into downstream tasks: models fail to enforce uniform answer-position constraints in multiple-choice question (MCQ) generation and systematically violate demographic targets in attribute-constrained text-to-image prompt synthesis. These findings indicate that current LLMs lack a functional internal sampler, necessitating the use of external tools for applications requiring statistical guarantees.
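The abstract's pass/fail criterion rests on goodness-of-fit testing of model-generated samples against a target distribution. As a minimal sketch of how such a check could work for the uniform case, the snippet below applies a chi-square test to N=1000 categorical samples; the helper name `passes_uniform_test` and the skewed sampler standing in for typical LLM output are illustrative assumptions, not the authors' benchmark code.

```python
# Sketch: chi-square goodness-of-fit check for samples that are claimed
# to come from a uniform categorical distribution over k outcomes.
import numpy as np
from scipy.stats import chisquare

def passes_uniform_test(samples, k, alpha=0.05):
    """True if the observed counts are consistent with Uniform{0..k-1}."""
    counts = np.bincount(samples, minlength=k)
    _, p_value = chisquare(counts)  # f_exp defaults to uniform frequencies
    return bool(p_value >= alpha)

k, n = 6, 1000

# A deterministic near-perfectly balanced sample: counts differ by at most 1.
balanced = np.arange(n) % k

# A skewed sampler standing in for LLM "mode collapse": probability mass
# concentrated on a few favorite outcomes (hypothetical bias, for illustration).
rng = np.random.default_rng(0)
biased = rng.choice(k, size=n, p=[0.30, 0.25, 0.20, 0.10, 0.10, 0.05])

print(passes_uniform_test(balanced, k))  # passes: counts near 1000/6 each
print(passes_uniform_test(biased, k))    # fails: large chi-square statistic
```

Running many such per-distribution tests and counting how often a model clears the significance threshold yields a pass rate of the kind the audit reports.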
Similar Papers
Failure to Mix: Large language models struggle to answer according to desired probability distributions
Machine Learning (CS)
AI models can't follow simple chance rules.
Evaluating the Use of Large Language Models as Synthetic Social Agents in Social Science Research
Artificial Intelligence
Makes AI better at guessing, not knowing for sure.
Epidemiology of Large Language Models: A Benchmark for Observational Distribution Knowledge
Artificial Intelligence
Tests if computers understand real-world chances.