ReEfBench: Quantifying the Reasoning Efficiency of LLMs

Published: January 7, 2026 | arXiv ID: 2601.03550v1

By: Zhizhang Fu, Yuancheng Gu, Chenkai Hu, and more

Potential Business Impact:

Tests whether an AI truly reasons or just talks a lot.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Test-time scaling has enabled Large Language Models (LLMs) to tackle complex reasoning, yet the limitations of current Chain-of-Thought (CoT) evaluation obscure whether performance gains stem from genuine reasoning or mere verbosity. To address this, (1) we propose a novel neuro-symbolic framework for non-intrusive, comprehensive, process-centric evaluation of reasoning. (2) Through this lens, we identify four distinct behavioral prototypes and diagnose their failure modes. (3) We examine the impact of inference mode, training strategy, and model scale. Our analysis reveals that extended token generation is not a prerequisite for deep reasoning. Furthermore, we identify critical constraints: mixing long and short CoT data in training risks premature saturation and collapse, while distillation into smaller models captures behavioral length but fails to replicate logical efficacy due to intrinsic capacity limits.
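
The abstract does not detail how the framework scores reasoning, but one way a process-centric efficiency metric could work is by weighing symbolically verified reasoning steps against the total tokens generated, so that verbose traces with little validated logic score poorly even when the final answer is right. The Python sketch below is a minimal, hypothetical illustration of that idea only; the Step structure, the verified flag, and the reasoning_efficiency formula are assumptions for illustration, not the paper's actual method.

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str       # one segment of the chain-of-thought
    tokens: int     # token count of this segment
    verified: bool  # did a symbolic checker validate this step? (assumed oracle)

def reasoning_efficiency(steps: list[Step]) -> float:
    """Hypothetical process-centric metric: the fraction of generated
    tokens spent on steps a symbolic verifier could confirm. A long
    trace with few verified steps scores low, regardless of whether
    the final answer happens to be correct."""
    total = sum(s.tokens for s in steps)
    if total == 0:
        return 0.0
    verified = sum(s.tokens for s in steps if s.verified)
    return verified / total

# Example: a verbose trace where only a small fraction of tokens
# carries logic the verifier can confirm (40 of 240, about 0.17).
trace = [
    Step("Restating the problem at length...", tokens=120, verified=False),
    Step("Let x = 3; then 2x + 1 = 7.", tokens=40, verified=True),
    Step("Reflecting on whether 7 seems right...", tokens=80, verified=False),
]
print(f"efficiency = {reasoning_efficiency(trace):.2f}")  # efficiency = 0.17
```

Under a metric like this, "extended token generation is not a prerequisite for deep reasoning" has a concrete reading: a short trace of verified steps can outscore a long one padded with restatement and reflection.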

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
20 pages

Category
Computer Science:
Artificial Intelligence