Multiplex Thinking: Reasoning via Token-wise Branch-and-Merge
By: Yao Tang, Li Dong, Yaru Hao, and more
Potential Business Impact:
Computers think smarter by guessing many answers at once.
Large language models often solve complex reasoning tasks more effectively with Chain-of-Thought (CoT), but at the cost of long, low-bandwidth token sequences. Humans, by contrast, often reason softly by maintaining a distribution over plausible next steps. Motivated by this, we propose Multiplex Thinking, a stochastic soft reasoning mechanism that, at each thinking step, samples K candidate tokens and aggregates their embeddings into a single continuous multiplex token. This preserves the vocabulary embedding prior and the sampling dynamics of standard discrete generation, while inducing a tractable probability distribution over multiplex rollouts. Consequently, multiplex trajectories can be directly optimized with on-policy reinforcement learning (RL). Importantly, Multiplex Thinking is self-adaptive: when the model is confident, the multiplex token is nearly discrete and behaves like standard CoT; when it is uncertain, it compactly represents multiple plausible next steps without increasing sequence length. Across challenging math reasoning benchmarks, Multiplex Thinking consistently outperforms strong discrete CoT and RL baselines from Pass@1 through Pass@1024, while producing shorter sequences. The code and checkpoints are available at https://github.com/GMLR-Penn/Multiplex-Thinking.
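To make the mechanism concrete, here is a minimal sketch of what the per-step "sample K candidates and merge their embeddings" operation could look like in PyTorch. This is an illustrative assumption based only on the abstract, not the authors' released implementation: the function name multiplex_token, the temperature argument, and the choice of probability-weighted averaging as the aggregation rule are all hypothetical.

```python
import torch
import torch.nn.functional as F

def multiplex_token(logits, embedding_matrix, k=4, temperature=1.0):
    """Illustrative sketch (not the authors' code): sample K candidate
    next tokens and merge their embeddings into one continuous token.

    logits:           (vocab_size,) next-token logits at the current step
    embedding_matrix: (vocab_size, hidden_dim) input embedding table
    """
    probs = F.softmax(logits / temperature, dim=-1)

    # Sample K candidate tokens, keeping the stochastic sampling dynamics
    # of standard discrete decoding.
    candidate_ids = torch.multinomial(probs, num_samples=k, replacement=True)

    # Renormalize the sampled tokens' probabilities to get mixture weights.
    weights = probs[candidate_ids]
    weights = weights / weights.sum()

    # Aggregate the candidates' embeddings into a single continuous
    # "multiplex" token that is fed back in place of one discrete token.
    candidate_embeds = embedding_matrix[candidate_ids]            # (k, hidden_dim)
    soft_token = (weights.unsqueeze(-1) * candidate_embeds).sum(dim=0)

    return soft_token, candidate_ids, weights
```

Under this reading, the self-adaptive behavior described in the abstract falls out naturally: when the model is confident, the K samples tend to repeat the same token and the multiplex token collapses to an ordinary discrete embedding, whereas an uncertain distribution yields a blend of several plausible next steps without adding any extra positions to the sequence.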
Similar Papers
MyGO Multiplex CoT: A Method for Self-Reflection in Large Language Models via Double Chain of Thought Thinking
Computation and Language
Makes AI think twice to give better answers.
From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models
Computation and Language
Helps AI "think step-by-step" to solve harder problems.