Score: 2

Exploring the Necessity of Reasoning in LLM-based Agent Scenarios

Published: March 14, 2025 | arXiv ID: 2503.11074v2

By: Xueyang Zhou, Guiyao Tie, Guowen Zhang, and more

Potential Business Impact:

New AI thinks better, but sometimes too much.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The rise of Large Reasoning Models (LRMs) signifies a paradigm shift toward advanced computational reasoning. Yet this progress disrupts agent frameworks traditionally anchored by execution-oriented Large Language Models (LLMs). To explore this transformation, we propose the LaRMA framework, encompassing nine tasks across Tool Usage, Plan Design, and Problem Solving, assessed with three top LLMs (e.g., Claude3.5-sonnet) and five leading LRMs (e.g., DeepSeek-R1). Our findings address four research questions: LRMs surpass LLMs in reasoning-intensive tasks such as Plan Design, leveraging iterative reflection for superior outcomes; LLMs excel in execution-driven tasks such as Tool Usage, prioritizing efficiency; hybrid LLM-LRM configurations, pairing LLMs as actors with LRMs as reflectors, optimize agent performance by blending execution speed with reasoning depth; and LRMs' enhanced reasoning incurs higher computational costs, prolonged processing, and behavioral issues such as overthinking and ignoring facts. This study fosters deeper inquiry into LRMs' balance of deep thinking and overthinking, laying a critical foundation for future advances in agent design.
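The hybrid actor-reflector pairing the abstract describes can be pictured as a simple control loop: a fast, execution-oriented model proposes an action, and a reasoning-oriented model critiques it before the agent commits. The sketch below is a minimal illustration of that flow under assumed interfaces, not the paper's LaRMA implementation; the `call` signatures, the "OK" acceptance convention, and the `max_rounds` cap (a guard against the overthinking the paper reports) are all assumptions introduced here.

```python
from typing import Callable

def hybrid_agent_step(
    task: str,
    actor: Callable[[str], str],            # execution-oriented LLM (e.g., Claude3.5-sonnet)
    reflector: Callable[[str, str], str],    # reasoning-oriented LRM (e.g., DeepSeek-R1)
    max_rounds: int = 3,                     # cap reflection rounds to limit cost/overthinking
) -> str:
    """Illustrative actor-reflector loop; interfaces are hypothetical, not LaRMA's."""
    action = actor(task)
    for _ in range(max_rounds):
        critique = reflector(task, action)
        if critique.strip().upper() == "OK":
            break  # reflector accepts the proposed action
        # Fold the critique back into the actor's prompt and retry.
        action = actor(f"{task}\nPrevious attempt: {action}\nCritique: {critique}")
    return action

if __name__ == "__main__":
    # Stub models so the sketch runs without any API access.
    actor = lambda prompt: "search('best route to airport')"
    reflector = lambda task, action: "OK"
    print(hybrid_agent_step("Plan a trip to the airport", actor, reflector))
```

The division of labor mirrors the abstract's finding: the cheap actor handles execution, while the costlier reflector is invoked only to check and refine, keeping reasoning depth without paying LRM latency on every step.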

Country of Origin
🇺🇸 🇨🇳 China, United States

Repos / Data Links

Page Count
71 pages

Category
Computer Science:
Artificial Intelligence