Fast, Slow, and Tool-augmented Thinking for LLMs: A Review
By: Xinda Jia, Jinpeng Li, Zezhong Wang, and others
Potential Business Impact:
Helps AI systems decide when to answer quickly, when to think step by step, and when to call outside tools.
Large Language Models (LLMs) have demonstrated remarkable progress in reasoning across diverse domains. However, effective reasoning in real-world tasks requires adapting the reasoning strategy to the demands of the problem, ranging from fast, intuitive responses to deliberate, step-by-step reasoning and tool-augmented thinking. Drawing inspiration from cognitive psychology, we propose a novel taxonomy of LLM reasoning strategies along two boundaries: a fast/slow boundary separating intuitive from deliberative processes, and an internal/external knowledge boundary distinguishing reasoning grounded in the model's parameters from reasoning augmented by external tools. We systematically survey recent work on adaptive reasoning in LLMs and categorize methods by the key decision factors that drive strategy selection. We conclude by highlighting open challenges and future directions toward more adaptive, efficient, and reliable LLMs.
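To make the taxonomy concrete, the sketch below frames the two boundaries as a 2x2 routing decision over reasoning strategies. This is an illustrative reading of the abstract, not code from the paper: the `route` function, its `difficulty` and `needs_external` signals, and the `slow_threshold` parameter are hypothetical stand-ins for whatever signals a real adaptive-reasoning system would use (confidence estimates, retrieval checks, and so on).

```python
from enum import Enum

class Mode(Enum):
    """One cell per quadrant of the fast/slow x internal/external taxonomy."""
    FAST_INTERNAL = "intuitive answer from parametric knowledge"
    SLOW_INTERNAL = "deliberate step-by-step (chain-of-thought) reasoning"
    FAST_EXTERNAL = "single tool call, e.g. a calculator or lookup"
    SLOW_EXTERNAL = "multi-step, tool-augmented deliberation"

def route(difficulty: float, needs_external: bool,
          slow_threshold: float = 0.5) -> Mode:
    """Pick a reasoning mode along the two boundaries.

    difficulty: hypothetical score in [0, 1]; crossing slow_threshold
        moves the model from fast/intuitive to slow/deliberative.
    needs_external: hypothetical flag for whether the answer lies
        outside the model's parameters and so requires tools.
    """
    slow = difficulty >= slow_threshold
    if needs_external:
        return Mode.SLOW_EXTERNAL if slow else Mode.FAST_EXTERNAL
    return Mode.SLOW_INTERNAL if slow else Mode.FAST_INTERNAL

# Example: a hard question whose answer is not stored in the parameters
print(route(difficulty=0.8, needs_external=True))  # Mode.SLOW_EXTERNAL
```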
Similar Papers
A Survey of Slow Thinking-based Reasoning LLMs using Reinforced Learning and Inference-time Scaling Law
Artificial Intelligence
Computers learn to think deeply like people.
Decoupling Knowledge and Reasoning in LLMs: An Exploration Using Cognitive Dual-System Theory
Artificial Intelligence
Shows how computers keep knowing facts separate from reasoning with them.
From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models
Artificial Intelligence
Computers change how they think based on how hard a problem is.