Batch-of-Thought: Cross-Instance Learning for Enhanced LLM Reasoning
By: Xuan Yang, Furong Jia, Roy Xie, and more
Potential Business Impact:
Makes AI reason better by comparing answers across related questions.
Current Large Language Model reasoning systems process queries independently, discarding valuable cross-instance signals such as shared reasoning patterns and consistency constraints. We introduce Batch-of-Thought (BoT), a training-free method that processes related queries jointly to enable cross-instance learning. By performing comparative analysis across batches, BoT identifies high-quality reasoning templates, detects errors through consistency checks, and amortizes computational costs. We instantiate BoT within a multi-agent reflection architecture (BoT-R), where a Reflector performs joint evaluation to unlock mutual information gain unavailable in isolated processing. Experiments across three model families and six benchmarks demonstrate that BoT-R consistently improves accuracy and confidence calibration while reducing inference costs by up to 61%. Our theoretical and experimental analysis reveals when and why batch-aware reasoning benefits LLM systems.
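The abstract does not spell out how the Reflector's joint evaluation works. As a rough illustration of the cross-instance idea, the toy sketch below (all names and the selection rule are hypothetical, not the authors' implementation) takes several related queries, each with candidate answers tagged by a reasoning "template", scores each template by how often its answers agree with the per-query majority across the whole batch, and then prefers the batch-best template when selecting a final answer for every query:

```python
from collections import Counter, defaultdict

def reflect_batch(batch):
    """Toy cross-instance Reflector (hypothetical sketch, not the BoT-R code).

    batch: list of queries; each query is a list of (template, answer)
    candidate pairs produced by independent reasoning attempts.
    Returns the batch-best template and one answer per query.
    """
    # 1. Cross-instance step: score each template by how often its answers
    #    match the per-query majority, pooled across the entire batch.
    template_hits = defaultdict(list)
    for candidates in batch:
        majority = Counter(a for _, a in candidates).most_common(1)[0][0]
        for template, answer in candidates:
            template_hits[template].append(answer == majority)
    best_template = max(
        template_hits,
        key=lambda t: sum(template_hits[t]) / len(template_hits[t]),
    )
    # 2. Per-query selection: prefer the batch-best template's answers;
    #    fall back to the plain per-query majority vote otherwise.
    results = []
    for candidates in batch:
        preferred = [a for t, a in candidates if t == best_template]
        if preferred:
            results.append(Counter(preferred).most_common(1)[0][0])
        else:
            results.append(Counter(a for _, a in candidates).most_common(1)[0][0])
    return best_template, results
```

The point of the sketch is the pooling in step 1: a template's reliability is estimated from the whole batch, which is exactly the mutual information unavailable when each query is judged in isolation.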
Similar Papers
ReEfBench: Quantifying the Reasoning Efficiency of LLMs
Artificial Intelligence
Finds if AI truly reasons or just talks a lot.
Asynchronous Reasoning: Training-Free Interactive Thinking LLMs
Machine Learning (CS)
Lets AI think and talk at the same time.
Meta-Reasoner: Dynamic Guidance for Optimized Inference-time Reasoning in Large Language Models
Artificial Intelligence
Helps computers solve problems smarter and faster.