Fast, Slow, and Tool-augmented Thinking for LLMs: A Review

Published: August 17, 2025 | arXiv ID: 2508.12265v1

By: Xinda Jia, Jinpeng Li, Zezhong Wang, and more

Potential Business Impact:

Lets LLMs match their reasoning effort and tool use to task difficulty, spending less compute on simple queries while reserving deliberate, tool-augmented reasoning for hard ones.

Large Language Models (LLMs) have demonstrated remarkable progress in reasoning across diverse domains. However, effective reasoning in real-world tasks requires adapting the reasoning strategy to the demands of the problem, ranging from fast, intuitive responses to deliberate, step-by-step reasoning and tool-augmented thinking. Drawing inspiration from cognitive psychology, we propose a novel taxonomy of LLM reasoning strategies along two boundaries: a fast/slow boundary separating intuitive from deliberative processes, and an internal/external boundary distinguishing reasoning grounded in the model's parameters from reasoning augmented by external tools. We systematically survey recent work on adaptive reasoning in LLMs and categorize methods based on key decision factors. We conclude by highlighting open challenges and future directions toward more adaptive, efficient, and reliable LLMs.
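The two boundaries in the abstract define four reasoning modes (fast vs. slow crossed with internal vs. tool-augmented). A minimal sketch of such a router is shown below; the function name, difficulty threshold, and mode labels are illustrative assumptions, not part of the paper.

```python
# Hypothetical sketch of the two-boundary taxonomy described above:
# fast/slow (degree of deliberation) x internal/external (tool use).
# The threshold 0.5 and the labels are illustrative, not from the paper.

def choose_strategy(difficulty: float, needs_external_knowledge: bool) -> str:
    """Map a query to one of four reasoning modes."""
    slow = difficulty >= 0.5  # deliberate, step-by-step reasoning
    if needs_external_knowledge:
        return "slow + tools" if slow else "fast + tools"
    return "slow (chain-of-thought)" if slow else "fast (direct answer)"

print(choose_strategy(0.2, False))  # → fast (direct answer)
print(choose_strategy(0.9, True))   # → slow + tools
```

In practice, the surveyed methods estimate the difficulty and knowledge signals with learned classifiers or the model's own confidence rather than fixed thresholds.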

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Computation and Language