Are LLM Belief Updates Consistent with Bayes' Theorem?
By: Sohaib Imran, Ihor Kendiukhov, Matthew Broerman, and more
Potential Business Impact:
Makes AI smarter at changing its mind with facts.
Do larger and more capable language models learn to update their "beliefs" about propositions more consistently with Bayes' theorem when presented with evidence in-context? To test this, we formulate a Bayesian Coherence Coefficient (BCC) metric and generate a dataset with which to measure the BCC. We measure BCC for multiple pre-trained-only language models across five model families, comparing against the number of model parameters, the amount of training data, and model scores on common benchmarks. Our results provide evidence for our hypothesis that larger and more capable pre-trained language models assign credences that are more coherent with Bayes' theorem. These results have important implications for our understanding and governance of LLMs.
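The abstract does not spell out how the Bayesian Coherence Coefficient is computed, but the idea of scoring a model's in-context belief updates against Bayes' theorem can be illustrated with a small sketch. The snippet below assumes a plausible (hypothetical) formulation: elicit a prior, the likelihoods of the evidence, and a posterior from the model for each proposition, compute the posterior that Bayes' theorem would imply, and report the correlation between the implied and elicited posterior log-odds. Function names and the correlation-based scoring are illustrative assumptions, not the paper's exact definition.

```python
import math

def log_odds(p, eps=1e-9):
    """Convert a probability to log-odds, clipped away from 0 and 1."""
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

def bayes_posterior(prior, lik_h, lik_not_h):
    """Posterior P(H | E) implied by Bayes' theorem from a prior and likelihoods."""
    num = lik_h * prior
    return num / (num + lik_not_h * (1 - prior))

def coherence_coefficient(priors, liks_h, liks_not_h, elicited_posteriors):
    """Pearson correlation between Bayes-implied and elicited posterior log-odds.

    Illustrative stand-in for the paper's BCC, not its exact definition.
    """
    implied = [log_odds(bayes_posterior(p, lh, lnh))
               for p, lh, lnh in zip(priors, liks_h, liks_not_h)]
    observed = [log_odds(q) for q in elicited_posteriors]
    n = len(implied)
    mean_i = sum(implied) / n
    mean_o = sum(observed) / n
    cov = sum((x - mean_i) * (y - mean_o) for x, y in zip(implied, observed))
    var_i = sum((x - mean_i) ** 2 for x in implied)
    var_o = sum((y - mean_o) ** 2 for y in observed)
    return cov / math.sqrt(var_i * var_o)

# Toy usage: credences hypothetically elicited from a model for three propositions.
priors = [0.2, 0.5, 0.7]        # P(H) before seeing the evidence
liks_h = [0.9, 0.6, 0.4]        # P(E | H)
liks_not_h = [0.1, 0.5, 0.8]    # P(E | not H)
elicited = [0.65, 0.55, 0.30]   # model's stated P(H | E)
print(coherence_coefficient(priors, liks_h, liks_not_h, elicited))
```

Under this reading, a coefficient near 1 would indicate belief updates that track Bayes' theorem closely, which is the property the paper reports improving with model scale and capability.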
Similar Papers
Incoherent Beliefs & Inconsistent Actions in Large Language Models
Machine Learning (CS)
Computers struggle to learn and act reliably.
Large Language Models as Discounted Bayesian Filters
Artificial Intelligence
Helps AI learn from new information faster.
A Benchmark for Zero-Shot Belief Inference in Large Language Models
Computation and Language
Helps computers understand what people believe.