Bayesian Mixture of Experts For Large Language Models
By: Maryam Dialameh, Hossein Rajabzadeh, Weiwei Zhang, and more
Potential Business Impact:
Helps AI know when it's unsure about answers.
We present Bayesian Mixture of Experts (Bayesian-MoE), a post-hoc uncertainty estimation framework for fine-tuned large language models (LLMs) based on Mixture-of-Experts architectures. Our method applies a structured Laplace approximation to the second linear layer of each expert, enabling calibrated uncertainty estimation without modifying the original training procedure or introducing new parameters. Unlike prior approaches, which apply Bayesian inference to added adapter modules, Bayesian-MoE directly targets the expert pathways already present in MoE models, leveraging their modular design for tractable block-wise posterior estimation. We use Kronecker-factored low-rank approximations to model curvature and derive scalable estimates of predictive uncertainty and marginal likelihood. Experiments on common-sense reasoning benchmarks with Qwen1.5-MoE and DeepSeek-MoE demonstrate that Bayesian-MoE improves both expected calibration error (ECE) and negative log-likelihood (NLL) over baselines, confirming its effectiveness for reliable downstream decision-making.
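To make the idea concrete, below is a minimal, illustrative sketch of a block-wise Laplace approximation with Kronecker-factored curvature applied to the second linear layer of a single expert. It is not the authors' implementation: the toy module `ToyExpert`, the helper names (`fit_kfac_factors`, `sample_fc2_weights`, `predictive`), the synthetic data, and the damping scheme are all assumptions made for this sketch.

```python
# Sketch: Kronecker-factored Laplace posterior over an expert's second linear layer.
# Assumptions: a toy two-layer expert, cross-entropy loss, isotropic Gaussian prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class ToyExpert(nn.Module):
    """A two-layer expert; only fc2 receives a Laplace posterior."""
    def __init__(self, d_in=16, d_hidden=32, d_out=4):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out, bias=False)  # bias omitted for brevity

    def forward(self, x):
        h = F.relu(self.fc1(x))        # h is the input to fc2 (the "A" factor)
        return self.fc2(h), h

def fit_kfac_factors(expert, x, y, prior_prec=1.0):
    """Estimate Kronecker factors A = E[h h^T] and G = E[g g^T] for fc2."""
    logits, h = expert(x)
    loss = F.cross_entropy(logits, y, reduction="sum")
    g = torch.autograd.grad(loss, logits)[0]            # gradients w.r.t. fc2 outputs, (N, d_out)
    n = x.shape[0]
    A = h.detach().T @ h.detach() / n                   # (d_hidden, d_hidden)
    G = g.detach().T @ g.detach() / n                   # (d_out, d_out)
    # Adding sqrt(prior precision) to each factor approximates adding an
    # isotropic Gaussian prior to the Kronecker-factored curvature.
    sp = prior_prec ** 0.5
    return A + sp * torch.eye(A.shape[0]), G + sp * torch.eye(G.shape[0])

def sample_fc2_weights(expert, A, G, n_samples=20):
    """Draw weight samples from the matrix-normal Laplace posterior of fc2."""
    W_map = expert.fc2.weight.detach()                  # (d_out, d_hidden)
    L_G = torch.linalg.cholesky(torch.linalg.inv(G))    # covariance factors are the
    L_A = torch.linalg.cholesky(torch.linalg.inv(A))    # inverses of the damped factors
    return [W_map + L_G @ torch.randn_like(W_map) @ L_A.T for _ in range(n_samples)]

def predictive(expert, x, weight_samples):
    """Monte-Carlo predictive distribution under the fc2 posterior."""
    probs = []
    with torch.no_grad():
        _, h = expert(x)
        for W in weight_samples:
            probs.append(F.softmax(h @ W.T, dim=-1))
    return torch.stack(probs).mean(0)                   # (N, d_out)

# Synthetic demo data standing in for tokens routed to one expert.
x = torch.randn(256, 16)
y = torch.randint(0, 4, (256,))
expert = ToyExpert()

A, G = fit_kfac_factors(expert, x, y, prior_prec=1.0)
weights = sample_fc2_weights(expert, A, G)
print(predictive(expert, x[:5], weights))
```

Because the posterior is estimated block-wise per expert after fine-tuning, this kind of procedure leaves the trained weights and routing untouched; the Monte-Carlo predictive average is what would be scored with ECE and NLL.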
Similar Papers
Bayesian Mixture-of-Experts: Towards Making LLMs Know What They Don't Know
Machine Learning (CS)
Makes AI know when it's unsure.
MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models
Machine Learning (CS)
Makes AI smarter and faster by using many smart parts.
MoMoE: A Mixture of Expert Agent Model for Financial Sentiment Analysis
Computational Engineering, Finance, and Science
Makes AI smarter by letting many AI parts work together.