Are LLM Belief Updates Consistent with Bayes' Theorem?

Published: July 23, 2025 | arXiv ID: 2507.17951v1

By: Sohaib Imran, Ihor Kendiukhov, Matthew Broerman, and more

Potential Business Impact:

Makes AI smarter at changing its mind with facts.

Business Areas:
A/B Testing, Data and Analytics

Do larger and more capable language models learn to update their "beliefs" about propositions more consistently with Bayes' theorem when presented with evidence in-context? To test this, we formulate a Bayesian Coherence Coefficient (BCC) metric and generate a dataset with which to measure the BCC. We measure BCC for multiple pre-trained-only language models across five model families, comparing against the number of model parameters, the amount of training data, and model scores on common benchmarks. Our results provide evidence for our hypothesis that larger and more capable pre-trained language models assign credences that are more coherent with Bayes' theorem. These results have important implications for our understanding and governance of LLMs.
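The core idea behind the abstract's coherence test can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual BCC definition: it elicits a model's prior P(H), likelihoods P(E|H) and P(E|¬H), and stated posterior, then measures how far the stated posterior deviates from the posterior Bayes' theorem implies.

```python
def bayes_posterior(prior, lik_h, lik_not_h):
    """P(H|E) from P(H), P(E|H), P(E|~H) via Bayes' theorem."""
    evidence = lik_h * prior + lik_not_h * (1.0 - prior)
    return lik_h * prior / evidence

def coherence_score(elicited):
    """Toy coherence measure (assumed form, not the paper's BCC):
    1.0 minus the mean absolute gap between a model's stated posterior
    and the Bayes-implied posterior, so 1.0 = perfectly coherent.
    `elicited` holds (prior, P(E|H), P(E|~H), stated_posterior) tuples."""
    gaps = [abs(post - bayes_posterior(p, lh, lnh))
            for p, lh, lnh, post in elicited]
    return 1.0 - sum(gaps) / len(gaps)

# Example: one perfectly coherent update and one failure to update.
data = [
    (0.5, 0.8, 0.2, 0.8),  # Bayes posterior is exactly 0.8
    (0.5, 0.8, 0.2, 0.5),  # model did not update at all
]
print(coherence_score(data))  # → 0.85
```

The paper additionally correlates such a score with parameter count, training-data volume, and benchmark performance across model families; the snippet only shows the per-example credence comparison.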

Page Count
15 pages

Category
Computer Science:
Computation and Language