The Wisdom of Deliberating AI Crowds: Does Deliberation Improve LLM-Based Forecasting?
By: Paul Schneider, Amalie Schramm
Structured deliberation has been found to improve the performance of human forecasters. This study investigates whether a similar intervention, in which LLMs review each other's forecasts before updating their own, improves accuracy in large language models (GPT-5, Claude Sonnet 4.5, Gemini 2.5 Pro). Using 202 resolved binary questions from the Metaculus Q2 2025 AI Forecasting Tournament, accuracy was assessed across four scenarios: (1) diverse models with distributed information, (2) diverse models with shared information, (3) homogeneous models with distributed information, and (4) homogeneous models with shared information. Results show that the intervention significantly improves accuracy in scenario (2), reducing log loss by 0.020, or about 4 percent in relative terms (p = 0.017). However, when homogeneous groups (three instances of the same model) engaged in the same process, no benefit was observed. Unexpectedly, providing LLMs with additional contextual information did not improve forecast accuracy, limiting our ability to study information pooling as a mechanism. Our findings suggest that deliberation among diverse models may be a viable strategy for improving LLM-based forecasting.
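The abstract does not give implementation details, but the protocol and scoring metric are easy to sketch. Below is a minimal, hypothetical Python illustration, not the paper's code: the `revise` callback and its averaging rule are assumptions standing in for a fresh LLM call whose prompt includes the peers' current forecasts, the simple-mean aggregation is likewise illustrative, and `log_loss` is the standard binary metric the study reports.

```python
import math

def log_loss(p: float, outcome: int, eps: float = 1e-15) -> float:
    """Binary log loss for a single probabilistic forecast p of a 0/1 outcome."""
    p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
    return -(outcome * math.log(p) + (1 - outcome) * math.log(1 - p))

def deliberate(forecasts, revise, rounds=1):
    """Run `rounds` of deliberation: each agent sees peers' forecasts and revises.

    `forecasts` maps agent -> probability. `revise(agent, own_p, peer_ps)` is a
    hypothetical callback standing in for an LLM call that is shown the peers'
    current forecasts before producing an updated probability.
    """
    current = dict(forecasts)
    for _ in range(rounds):
        current = {
            agent: revise(agent, p, [q for a, q in current.items() if a != agent])
            for agent, p in current.items()
        }
    return current

# Toy run with three diverse agents and a naive revision rule that averages
# an agent's own view with the mean of its peers' views.
initial = {"gpt": 0.70, "claude": 0.55, "gemini": 0.80}
revised = deliberate(initial, lambda a, p, peers: 0.5 * p + 0.5 * sum(peers) / len(peers))
group_p = sum(revised.values()) / len(revised)  # aggregate by simple mean
print(round(group_p, 3), round(log_loss(group_p, outcome=1), 3))
```

In this toy setup the group forecast after one round is about 0.683; scoring it against a question that resolved yes gives a log loss of about 0.381, which is the quantity the reported 0.020 improvement is measured in.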
Similar Papers
Wisdom of the Crowd, Without the Crowd: A Socratic LLM for Asynchronous Deliberation on Perspectivist Data
Human-Computer Interaction
AI learns better by talking to itself.
Multi-Agent Debate for LLM Judges with Adaptive Stability Detection
Artificial Intelligence
Debating computers make better judgments than voting ones.
An LLM-based Delphi Study to Predict GenAI Evolution
Artificial Intelligence
Helps predict the future of AI through structured computer discussions.