PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning
By: Jingcheng Hu, Yinmin Zhang, Shijie Shang, and more
Potential Business Impact:
Lets computers solve harder math problems faster.
We introduce Parallel Coordinated Reasoning (PaCoRe), a training-and-inference framework designed to overcome a central limitation of contemporary language models: their inability to scale test-time compute (TTC) far beyond sequential reasoning under a fixed context window. PaCoRe departs from the traditional sequential paradigm by driving TTC through massively parallel exploration, coordinated across multiple rounds via a message-passing architecture. Each round launches many parallel reasoning trajectories, compacts their findings into context-bounded messages, and synthesizes these messages to guide the next round and ultimately produce the final answer. Trained end-to-end with large-scale, outcome-based reinforcement learning, the model masters the synthesis abilities required by PaCoRe and scales to multi-million-token effective TTC without exceeding context limits. The approach yields strong improvements across diverse domains, and notably pushes reasoning beyond frontier systems in mathematics: an 8B model reaches 94.5% on HMMT 2025, surpassing GPT-5's 93.2% by scaling effective TTC to roughly two million tokens. We open-source model checkpoints, training data, and the full inference pipeline to accelerate follow-up work.
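The abstract describes a round-based inference loop: launch parallel trajectories, compact each into a bounded message, and synthesize the messages to seed the next round. The sketch below shows one plausible shape of that loop; all helper names and parameters (generate_trajectory, compact_to_message, synthesize, num_rounds, num_parallel, msg_token_budget) are illustrative assumptions, not the authors' released pipeline or API.

```python
# Minimal sketch of a PaCoRe-style inference loop, under the assumptions stated above.
from typing import Callable, List

def pacore_infer(
    problem: str,
    generate_trajectory: Callable[[str], str],       # produces one full reasoning trace
    compact_to_message: Callable[[str, int], str],   # trace -> context-bounded message
    synthesize: Callable[[str, List[str]], str],     # problem + messages -> next context / answer
    num_rounds: int = 3,
    num_parallel: int = 16,
    msg_token_budget: int = 512,
) -> str:
    """Multiple rounds of parallel exploration coordinated by message passing."""
    context = problem
    for _ in range(num_rounds):
        # 1. Launch many parallel reasoning trajectories from the current context.
        trajectories = [generate_trajectory(context) for _ in range(num_parallel)]

        # 2. Compact each trajectory into a bounded message so effective TTC grows
        #    with the number of trajectories while the context window stays fixed.
        messages = [compact_to_message(t, msg_token_budget) for t in trajectories]

        # 3. Synthesize the messages into guidance for the next round
        #    (or, after the last round, into the final answer).
        context = synthesize(problem, messages)

    return context
```

In this reading, the synthesis step is what the reinforcement-learning stage would have to teach the model, since the final answer depends on combining many compacted messages rather than on any single trajectory.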
Similar Papers
Learning Adaptive Parallel Reasoning with Language Models
Artificial Intelligence
Lets computers think smarter, faster, and more accurately.
Towards Thinking-Optimal Scaling of Test-Time Compute for LLM Reasoning
Computation and Language
Makes AI better at math by thinking just enough.
Correct, Concise and Complete: Multi-stage Training For Adaptive Reasoning
Computation and Language
Makes AI think less to solve problems faster.