LLM-BI: Towards Fully Automated Bayesian Inference with Large Language Models
By: Yongchao Huang
Potential Business Impact:
Lets non-experts set up statistical models by describing their problem in plain language.
A significant barrier to the widespread adoption of Bayesian inference is the specification of prior distributions and likelihoods, which often requires specialized statistical expertise. This paper investigates the feasibility of using a Large Language Model (LLM) to automate this process. We introduce LLM-BI (Large Language Model-driven Bayesian Inference), a conceptual pipeline for automating Bayesian workflows. As a proof of concept, we present two experiments focused on Bayesian linear regression. In Experiment I, we demonstrate that an LLM can successfully elicit prior distributions from natural language. In Experiment II, we show that an LLM can specify the entire model structure, including both the priors and the likelihood, from a single high-level problem description. Our results validate the potential of LLMs to automate key steps in Bayesian modeling, pointing toward a fully automated inference pipeline for probabilistic programming.
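To make the idea concrete, the snippet below is a minimal sketch of an LLM-BI-style pipeline for the linear-regression setting, not the paper's actual implementation. It assumes the LLM has already returned a JSON specification of priors and a likelihood; the hard-coded `llm_model_spec` stands in for that response, which is then translated into a PyMC model and sampled.

```python
# Minimal sketch of an LLM-BI-style pipeline (illustrative, not the paper's code).
# Assumption: the LLM has returned a JSON model spec; `llm_model_spec` below is a
# hypothetical stand-in for that response.
import json
import numpy as np
import pymc as pm

llm_model_spec = json.loads("""
{
  "priors": {
    "intercept": {"dist": "Normal", "mu": 0.0, "sigma": 10.0},
    "slope":     {"dist": "Normal", "mu": 0.0, "sigma": 5.0},
    "noise":     {"dist": "HalfNormal", "sigma": 1.0}
  },
  "likelihood": "Normal"
}
""")

def build_prior(name, spec):
    """Map a JSON prior description onto a PyMC distribution."""
    if spec["dist"] == "Normal":
        return pm.Normal(name, mu=spec["mu"], sigma=spec["sigma"])
    if spec["dist"] == "HalfNormal":
        return pm.HalfNormal(name, sigma=spec["sigma"])
    raise ValueError(f"Unsupported distribution: {spec['dist']}")

# Synthetic data for the linear-regression example.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.5 + 2.0 * x + rng.normal(scale=0.5, size=100)

with pm.Model():
    priors = llm_model_spec["priors"]
    intercept = build_prior("intercept", priors["intercept"])
    slope = build_prior("slope", priors["slope"])
    noise = build_prior("noise", priors["noise"])
    # Likelihood taken from the LLM's spec (Normal for linear regression).
    pm.Normal("y", mu=intercept + slope * x, sigma=noise, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)
```

In this sketch the specification is fixed; in the pipeline the paper describes, the JSON would come from the LLM's response to the user's natural-language problem description (Experiment I covering the priors, Experiment II the full model structure).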
Similar Papers
Bayesian Teaching Enables Probabilistic Reasoning in Large Language Models
Computation and Language
Teaches computers to learn and guess better.
Ensemble Bayesian Inference: Leveraging Small Language Models to Achieve LLM-level Accuracy in Profile Matching Tasks
Computation and Language
Small AI teams can beat big AI teams.
Can LLMs Assist Expert Elicitation for Probabilistic Causal Modeling?
Artificial Intelligence
Helps doctors understand health problems better.