Large Language Models as Discounted Bayesian Filters
By: Jensen Zhang, Jing Yang, Keze Wang
Large Language Models (LLMs) demonstrate strong few-shot generalization through in-context learning, yet their reasoning in dynamic and stochastic environments remains opaque. Prior studies mainly focus on static tasks and overlook the online adaptation required when beliefs must be continuously updated, which is a key capability for LLMs acting as world models or agents. We introduce a Bayesian filtering framework to evaluate online inference in LLMs. Our probabilistic probe suite spans both multivariate discrete distributions, such as dice rolls, and continuous distributions, such as Gaussian processes, where ground-truth parameters shift over time. We find that while LLM belief updates resemble Bayesian posteriors, they are more accurately characterized by an exponential forgetting filter with a model-specific discount factor smaller than one. This reveals systematic discounting of older evidence that varies significantly across model architectures. Although inherent priors are often miscalibrated, the updating mechanism itself remains structured and principled. We further validate these findings in a simulated agent task and propose prompting strategies that effectively recalibrate priors with minimal computational cost.
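The exponential forgetting filter described in the abstract can be sketched for the discrete (dice-roll) case as a Dirichlet-multinomial update in which all accumulated pseudo-counts are multiplied by a discount factor gamma before each new observation is added; gamma = 1 recovers the standard Bayesian posterior, while gamma < 1 geometrically down-weights older evidence. This is a minimal illustrative sketch, not the paper's code; the function name and parameters are our own.

```python
import numpy as np

def discounted_posterior(observations, gamma=0.9, n_faces=6, prior=1.0):
    """Exponential-forgetting filter over a categorical (die-face) belief.

    Pseudo-counts are discounted by `gamma` at every step, so an observation
    seen k steps ago contributes with weight gamma**k. Setting gamma=1 gives
    the ordinary Dirichlet-multinomial Bayesian posterior.
    """
    counts = np.full(n_faces, prior)   # symmetric Dirichlet prior pseudo-counts
    for obs in observations:
        counts = gamma * counts        # systematically discount older evidence
        counts[obs] += 1.0             # incorporate the newest observation
    return counts / counts.sum()       # posterior mean over the faces

# A die whose ground-truth bias shifts mid-stream, as in the probe suite:
stream = [0] * 30 + [5] * 10
belief = discounted_posterior(stream, gamma=0.8)
# With gamma < 1 the filter tracks the shift, so face 5 now dominates face 0.
```

Because the discount is applied uniformly to all counts, the filter's effective memory is roughly 1/(1 - gamma) observations, which is one way to read the model-specific discount factors the paper estimates.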