
PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning

Published: January 9, 2026 | arXiv ID: 2601.05593v1

By: Jingcheng Hu, Yinmin Zhang, Shijie Shang, and more

Potential Business Impact:

Lets language models solve harder math problems by running many reasoning attempts in parallel.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We introduce Parallel Coordinated Reasoning (PaCoRe), a training-and-inference framework designed to overcome a central limitation of contemporary language models: their inability to scale test-time compute (TTC) far beyond sequential reasoning under a fixed context window. PaCoRe departs from the traditional sequential paradigm by driving TTC through massive parallel exploration coordinated via a multi-round message-passing architecture. Each round launches many parallel reasoning trajectories, compacts their findings into context-bounded messages, and synthesizes these messages to guide the next round and ultimately produce the final answer. Trained end-to-end with large-scale, outcome-based reinforcement learning, the model masters the synthesis abilities PaCoRe requires and scales to multi-million-token effective TTC without exceeding context limits. The approach yields strong improvements across diverse domains and notably pushes reasoning beyond frontier systems in mathematics: an 8B model reaches 94.5% on HMMT 2025, surpassing GPT-5's 93.2%, by scaling effective TTC to roughly two million tokens. We open-source model checkpoints, training data, and the full inference pipeline to accelerate follow-up work.
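To make the round structure concrete, below is a minimal Python sketch of the inference loop the abstract describes. Every name in it (pacore_infer, generate_trajectory, compact_to_message, synthesize) is a hypothetical stand-in for calls into the trained model, not the authors' released API; read it as an illustration under those assumptions rather than the actual pipeline.

# Minimal sketch of the PaCoRe inference loop as described above.
# All helper callables are hypothetical stand-ins for model calls,
# not the authors' open-sourced implementation.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def pacore_infer(
    problem: str,
    generate_trajectory: Callable[[str], str],    # one parallel reasoning rollout
    compact_to_message: Callable[[str], str],     # compress a rollout into a bounded message
    synthesize: Callable[[str, List[str]], str],  # fuse messages into next-round guidance
    num_rounds: int = 3,
    num_parallel: int = 8,
) -> str:
    """Run multiple rounds of parallel exploration coordinated by message passing."""
    guidance = problem
    for _ in range(num_rounds):
        # (1) Launch many reasoning trajectories in parallel from the current guidance.
        with ThreadPoolExecutor(max_workers=num_parallel) as pool:
            trajectories = list(pool.map(generate_trajectory, [guidance] * num_parallel))
        # (2) Compact each long trajectory into a short, context-bounded message.
        messages = [compact_to_message(t) for t in trajectories]
        # (3) Synthesize the messages into guidance for the next round; the
        #     final round's synthesis doubles as the answer.
        guidance = synthesize(problem, messages)
    return guidance

# Toy demo with stub callables; real use would route these through the model.
if __name__ == "__main__":
    print(pacore_infer(
        "What is 2 + 2?",
        generate_trajectory=lambda g: f"rollout exploring: {g[:40]}",
        compact_to_message=lambda t: t[:60],
        synthesize=lambda p, msgs: f"answer after fusing {len(msgs)} messages",
    ))

The design point the sketch tries to capture is that only the compacted messages and the synthesized guidance carry forward between rounds, so effective TTC grows with num_rounds * num_parallel while the live context stays bounded.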


Page Count
22 pages

Category
Computer Science: Machine Learning (CS)