Solomonoff-Inspired Hypothesis Ranking with LLMs for Prediction Under Uncertainty
By: Josh Barber, Rourke Young, Cameron Coombe, et al.
Reasoning under uncertainty is a key challenge in AI, especially for real-world tasks where problems with sparse data demand systematic generalisation. Existing approaches struggle to balance accuracy and simplicity when evaluating multiple candidate solutions. We propose a Solomonoff-inspired method that weights LLM-generated hypotheses by simplicity and predictive fit. Applied to benchmark (Mini-ARC) tasks, our method produces Solomonoff-weighted mixtures for per-cell predictions, yielding conservative, uncertainty-aware outputs even when hypotheses are noisy or partially incorrect. Compared to Bayesian Model Averaging (BMA), Solomonoff scoring spreads probability more evenly across competing hypotheses, whereas BMA concentrates weight on the most likely but potentially flawed candidates. Across tasks, these results highlight the value of algorithmic information-theoretic priors for interpretable, reliable multi-hypothesis reasoning under uncertainty.
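To make the weighting scheme concrete, here is a minimal Python sketch of the idea the abstract describes. It assumes each hypothesis is approximated by a description length (standing in for Kolmogorov complexity) and a log-likelihood on the training grids; the function names (`solomonoff_weights`, `bma_weights`, `mix_predictions`) and the uniform-prior form of BMA are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def solomonoff_weights(description_lengths, log_likelihoods):
    """Solomonoff-style score: weight each hypothesis by
    2^(-K) * P(data | h), with the complexity K approximated
    by the hypothesis's description length in bits."""
    log_w = (-np.asarray(description_lengths, dtype=float) * np.log(2.0)
             + np.asarray(log_likelihoods, dtype=float))
    log_w -= log_w.max()          # stabilise before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

def bma_weights(log_likelihoods):
    """Bayesian Model Averaging with a uniform prior: weights track
    the likelihood alone, so mass concentrates on the single
    best-fitting (but possibly flawed) hypothesis."""
    log_w = np.asarray(log_likelihoods, dtype=float)
    log_w -= log_w.max()
    w = np.exp(log_w)
    return w / w.sum()

def mix_predictions(weights, per_cell_probs):
    """Combine per-cell predictive distributions into one mixture.
    per_cell_probs has shape (n_hypotheses, n_cells, n_colours)."""
    return np.einsum("h,hcv->cv", weights, per_cell_probs)

# Illustrative numbers only: three candidate programs for a
# Mini-ARC-style task, with lengths in bits and training log-likelihoods.
lengths = [40, 55, 90]
loglik = [-2.0, -1.5, -0.5]
w_sol = solomonoff_weights(lengths, loglik)   # favours short + well-fitting
w_bma = bma_weights(loglik)                   # concentrates on best fit
```

Under these assumptions, the simplicity prior keeps shorter hypotheses in play even when a longer one fits slightly better, which is the mechanism behind the more evenly spread, uncertainty-aware mixtures reported above.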