Score: 2

Batch-of-Thought: Cross-Instance Learning for Enhanced LLM Reasoning

Published: January 6, 2026 | arXiv ID: 2601.02950v1

By: Xuan Yang, Furong Jia, Roy Xie, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Improves AI reasoning accuracy and lowers inference costs by comparing answers across related queries.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Current Large Language Model reasoning systems process queries independently, discarding valuable cross-instance signals such as shared reasoning patterns and consistency constraints. We introduce Batch-of-Thought (BoT), a training-free method that processes related queries jointly to enable cross-instance learning. By performing comparative analysis across batches, BoT identifies high-quality reasoning templates, detects errors through consistency checks, and amortizes computational costs. We instantiate BoT within a multi-agent reflection architecture (BoT-R), where a Reflector performs joint evaluation to unlock mutual information gain unavailable in isolated processing. Experiments across three model families and six benchmarks demonstrate that BoT-R consistently improves accuracy and confidence calibration while reducing inference costs by up to 61%. Our theoretical and experimental analysis reveals when and why batch-aware reasoning benefits LLM systems.
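The cross-instance consistency check described in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the grouping key, tuple layout, and majority-vote rule below are illustrative assumptions, standing in for the joint evaluation a Reflector agent would perform over a batch of draft answers.

```python
from collections import Counter, defaultdict

def flag_inconsistent(batch):
    """Toy cross-instance consistency check (illustrative, not the paper's method).

    batch: list of (query_id, group_key, draft_answer) tuples, where
    group_key marks queries assumed to share a reasoning pattern.
    Returns the majority answer per group and the query ids whose
    drafts disagree with their group's majority (candidates for re-check).
    """
    groups = defaultdict(list)
    for qid, key, ans in batch:
        groups[key].append((qid, ans))

    majority, flagged = {}, []
    for key, items in groups.items():
        counts = Counter(ans for _, ans in items)
        majority[key] = counts.most_common(1)[0][0]
        flagged += [qid for qid, ans in items if ans != majority[key]]
    return majority, flagged

# Hypothetical drafts: three related queries sharing one reasoning pattern
drafts = [
    (1, "capital_fr", "Paris"),
    (2, "capital_fr", "Paris"),
    (3, "capital_fr", "Lyon"),  # outlier, flagged for reflection
]
print(flag_inconsistent(drafts))  # → ({'capital_fr': 'Paris'}, [3])
```

Processing the drafts jointly is what exposes the outlier; evaluated in isolation, each answer would have no peer signal to check against.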

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡³ United States, China

Page Count
18 pages

Category
Computer Science:
Artificial Intelligence