ReEfBench: Quantifying the Reasoning Efficiency of LLMs
By: Zhizhang Fu, Yuancheng Gu, Chenkai Hu, and more
Potential Business Impact:
Determines whether AI truly reasons or just talks a lot.
Test-time scaling has enabled Large Language Models (LLMs) to tackle complex reasoning, yet the limitations of current Chain-of-Thought (CoT) evaluation obscure whether performance gains stem from genuine reasoning or mere verbosity. To address this, (1) we propose a novel neuro-symbolic framework for the non-intrusive, comprehensive, process-centric evaluation of reasoning. (2) Through this lens, we identify four distinct behavioral prototypes and diagnose their failure modes. (3) We examine the impact of inference mode, training strategy, and model scale. Our analysis reveals that extended token generation is not a prerequisite for deep reasoning. Furthermore, we reveal critical constraints: mixing long and short CoT data in training risks premature saturation and collapse, while distillation into smaller models captures behavioral length but fails to replicate logical efficacy due to intrinsic capacity limits.
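To make the idea of process-centric efficiency concrete, here is a minimal sketch of one way such a score could be computed. The paper's actual neuro-symbolic framework is not detailed here, so every name in this example (ReasoningTrace, step_is_valid, reasoning_efficiency) is hypothetical, and the metric shown (the fraction of generated tokens that belong to logically valid steps) is an illustrative stand-in, not the benchmark's method.

```python
# Illustrative sketch only: the paper's neuro-symbolic evaluator is not public here,
# so this toy metric and all names below are hypothetical. The idea it captures is
# process-centric scoring: credit logically useful steps rather than raw token count.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReasoningTrace:
    """A Chain-of-Thought transcript split into discrete steps (hypothetical structure)."""
    steps: List[str]

    @property
    def total_tokens(self) -> int:
        # Crude whitespace tokenization stands in for a real tokenizer.
        return sum(len(step.split()) for step in self.steps)


def reasoning_efficiency(
    trace: ReasoningTrace,
    step_is_valid: Callable[[str], bool],
) -> float:
    """Fraction of generated tokens that belong to logically valid steps.

    In a real pipeline, `step_is_valid` would be a symbolic checker; here it is
    any caller-supplied predicate.
    """
    if trace.total_tokens == 0:
        return 0.0
    valid_tokens = sum(
        len(step.split()) for step in trace.steps if step_is_valid(step)
    )
    return valid_tokens / trace.total_tokens


if __name__ == "__main__":
    trace = ReasoningTrace(
        steps=[
            "Let x be the number of apples, so x + 3 = 7.",
            "Hmm, let me restate the problem again in different words...",
            "Therefore x = 4.",
        ]
    )
    # Toy validity check: treat steps containing an equation as "useful".
    efficiency = reasoning_efficiency(trace, step_is_valid=lambda s: "=" in s)
    print(f"Reasoning efficiency: {efficiency:.2f}")
```

Under a metric like this, a model that reaches the answer in a few valid steps scores higher than one that pads its trace with restatements, which is the sense in which extended token generation need not indicate deeper reasoning.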
Similar Papers
EffiReason-Bench: A Unified Benchmark for Evaluating and Advancing Efficient Reasoning in Large Language Models
Computation and Language
Makes AI explain things shorter and smarter.
Correct, Concise and Complete: Multi-stage Training For Adaptive Reasoning
Computation and Language
Makes AI think less to solve problems faster.
Towards Thinking-Optimal Scaling of Test-Time Compute for LLM Reasoning
Computation and Language
Makes AI better at math by thinking just enough.