Score: 2

Bayesian Mixture of Experts For Large Language Models

Published: November 12, 2025 | arXiv ID: 2511.08968v1

By: Maryam Dialameh, Hossein Rajabzadeh, Weiwei Zhang and more

BigTech Affiliations: Huawei

Potential Business Impact:

Helps AI models recognize when they are unsure about their answers.

Business Areas:
A/B Testing, Data and Analytics

We present Bayesian Mixture of Experts (Bayesian-MoE), a post-hoc uncertainty estimation framework for fine-tuned large language models (LLMs) based on Mixture-of-Experts architectures. Our method applies a structured Laplace approximation to the second linear layer of each expert, enabling calibrated uncertainty estimation without modifying the original training procedure or introducing new parameters. Unlike prior approaches, which apply Bayesian inference to added adapter modules, Bayesian-MoE directly targets the expert pathways already present in MoE models, leveraging their modular design for tractable block-wise posterior estimation. We use Kronecker-factored low-rank approximations to model curvature and derive scalable estimates of predictive uncertainty and marginal likelihood. Experiments on common-sense reasoning benchmarks with Qwen1.5-MoE and DeepSeek-MoE demonstrate that Bayesian-MoE improves both expected calibration error (ECE) and negative log-likelihood (NLL) over baselines, confirming its effectiveness for reliable downstream decision-making.

Country of Origin
🇨🇦 🇨🇳 Canada, China

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)