Failure to Mix: Large language models struggle to answer according to desired probability distributions

Published: November 18, 2025 | arXiv ID: 2511.14630v1

By: Ivy Yuqian Yang, David Yu Zhang

Potential Business Impact:

AI models fail to follow even simple probability instructions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Scientific idea generation and selection requires exploration following a target probability distribution. In contrast, current AI benchmarks have objectively correct answers, and training large language models (LLMs) via reinforcement learning against these benchmarks discourages probabilistic exploration. Here, we conducted systematic experiments requesting LLMs to produce outputs following simple probability distributions, and found that all modern LLMs tested grossly fail to follow the distributions. For example, requesting a binary output of "1" 49% of the time produces an answer of "0" nearly 100% of the time. This step-function-like behavior, near-exclusively generating the output with the marginally highest probability, overrules even strong built-in LLM biases.
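The failure mode the abstract describes can be illustrated with a small simulation. The sketch below is not the paper's experimental code; it contrasts a hypothetical sampler that honors the requested distribution with a "mode-collapsed" sampler that always emits the marginally more likely token, reproducing the step-function behavior reported (49% requested "1" yields "1" essentially 0% of the time).

```python
import random
from collections import Counter

def ideal_sampler(p_one: float) -> str:
    # Honors the requested distribution: emits "1" with probability p_one.
    return "1" if random.random() < p_one else "0"

def mode_collapsed_sampler(p_one: float) -> str:
    # Step-function behavior reported in the paper: always emits the
    # output whose requested probability is (even marginally) highest.
    return "1" if p_one > 0.5 else "0"

def empirical_freq_of_one(sampler, p_one: float, n: int = 10_000) -> float:
    # Measures how often the sampler actually outputs "1" over n trials.
    counts = Counter(sampler(p_one) for _ in range(n))
    return counts["1"] / n

random.seed(0)
print(empirical_freq_of_one(ideal_sampler, 0.49))           # close to 0.49
print(empirical_freq_of_one(mode_collapsed_sampler, 0.49))  # → 0.0
```

A well-calibrated model should behave like `ideal_sampler`; the paper's finding is that tested LLMs instead behave like `mode_collapsed_sampler`.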

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)