A Survey on Parallel Reasoning
By: Ziqi Wang, Boye Niu, Zipeng Gao, and more
Potential Business Impact:
Helps computers think in many ways at once.
With the increasing capabilities of Large Language Models (LLMs), parallel reasoning has emerged as a new inference paradigm that enhances reasoning robustness by concurrently exploring multiple lines of thought before converging on a final answer. Exploring parallel reasoning has become a significant trend for overcoming the fragility of standard sequential methods and improving practical performance. In this paper, we aim to survey and summarize the progress and challenges of parallel reasoning. We first present a formal definition of parallel reasoning and clarify its distinction from related concepts like Chain-of-Thought. Then, we organize and discuss advanced techniques based on a novel taxonomy, including non-interactive reasoning, interactive reasoning, and efficiency-focused decoding strategies. Additionally, we explore various application scenarios, such as solving complex problems and enhancing the reliability of LLM outputs. Finally, we highlight the core challenges of parallel reasoning and suggest potential directions for future research. We hope that our work can provide a useful roadmap for beginners and encourage more research on improving parallel reasoning methods. Related resources are available at https://github.com/PPPP-kaqiu/Awesome-Parallel-Reasoning.
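The non-interactive form of parallel reasoning described above can be sketched in a few lines: sample several independent reasoning paths concurrently, then converge on a final answer by majority vote (as in self-consistency decoding). This is a minimal illustration, not the paper's method; `sample_reasoning_path` is a hypothetical stand-in for one stochastic LLM reasoning trace.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import random

def sample_reasoning_path(question: str, seed: int) -> str:
    # Hypothetical stand-in for one stochastic LLM reasoning trace:
    # a "model" that usually returns the right answer but sometimes errs.
    rng = random.Random(seed)
    return "4" if rng.random() > 0.2 else "5"

def parallel_reason(question: str, n_paths: int = 8) -> str:
    # Launch independent reasoning paths concurrently, then converge
    # on a final answer by majority vote over the paths' answers.
    with ThreadPoolExecutor(max_workers=n_paths) as pool:
        answers = list(
            pool.map(lambda s: sample_reasoning_path(question, s), range(n_paths))
        )
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

print(parallel_reason("What is 2 + 2?"))
```

A single flaky path can return "5", but aggregating eight independent paths makes the majority answer far more robust, which is the fragility-reduction argument the survey makes for parallel over purely sequential reasoning.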
Similar Papers
Implicit Reasoning in Large Language Models: A Comprehensive Survey
Computation and Language
Lets computers think faster without showing steps.
Learning Adaptive Parallel Reasoning with Language Models
Artificial Intelligence
Lets computers think smarter, faster, and more accurately.
Parallel-R1: Towards Parallel Thinking via Reinforcement Learning
Computation and Language
Makes computers think in many ways to solve problems.