Failure to Mix: Large language models struggle to answer according to desired probability distributions
By: Ivy Yuqian Yang, David Yu Zhang
Potential Business Impact:
AI models can't follow simple chance rules.
Scientific idea generation and selection requires exploration following a target probability distribution. In contrast, current AI benchmarks have objectively correct answers, and training large language models (LLMs) via reinforcement learning against these benchmarks discourages probabilistic exploration. Here, we conducted systematic experiments asking LLMs to produce outputs following simple probability distributions, and found that all modern LLMs tested grossly fail to follow the distributions. For example, requesting a binary output of "1" 49% of the time produces an answer of "0" nearly 100% of the time. This step-function-like behavior, in which the model near-exclusively generates the marginally more probable output, overrides even strong in-built LLM biases.
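To make the experimental setup concrete, here is a minimal sketch of the kind of trial described above, assuming the `openai` Python client; the model name, prompt wording, and trial count are illustrative placeholders, not the authors' exact protocol.

```python
# Sketch of the "Failure to Mix" experiment: ask for "1" with 49% probability
# and measure the empirical frequency over repeated independent trials.
# Assumes the `openai` Python client; model name and prompt are placeholders.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    'Output a single character: "1" with probability 49% and "0" with '
    "probability 51%. Output only that character and nothing else."
)
N_TRIALS = 200

counts = Counter()
for _ in range(N_TRIALS):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,      # default sampling temperature
        max_tokens=2,
    )
    counts[resp.choices[0].message.content.strip()] += 1

# A model that mixes correctly should emit "1" on roughly 49% of trials;
# the paper reports near-0% instead (step-function behavior).
print({k: v / N_TRIALS for k, v in counts.items()})
```

Under this setup, the gap between the requested frequency (49%) and the observed frequency is the quantity of interest; the paper's central finding is that this gap collapses to an all-or-nothing choice of the marginally more probable output.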
Similar Papers
Epidemiology of Large Language Models: A Benchmark for Observational Distribution Knowledge
Artificial Intelligence
Tests if computers understand real-world chances.
Evaluating the Use of Large Language Models as Synthetic Social Agents in Social Science Research
Artificial Intelligence
Makes AI better at guessing, not knowing for sure.
Incoherent Beliefs & Inconsistent Actions in Large Language Models
Machine Learning (CS)
Computers struggle to learn and act reliably.