LongReasonArena: A Long Reasoning Benchmark for Large Language Models

Published: August 26, 2025 | arXiv ID: 2508.19363v1

By: Jiayu Ding, Shuming Ma, Lei Cui, and more

Potential Business Impact:

Tests whether language models can carry out long, multi-step reasoning to solve problems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Existing long-context benchmarks for Large Language Models (LLMs) focus on evaluating comprehension of long inputs, while overlooking the evaluation of long reasoning abilities. To address this gap, we introduce LongReasonArena, a benchmark specifically designed to assess the long reasoning capabilities of LLMs. Our tasks require models to solve problems by executing multi-step algorithms that reflect key aspects of long reasoning, such as retrieval and backtracking. By controlling the inputs, the required reasoning length can be arbitrarily scaled, reaching up to 1 million tokens of reasoning for the most challenging tasks. Extensive evaluation results demonstrate that LongReasonArena presents a significant challenge for both open-source and proprietary LLMs. For instance, DeepSeek-R1 achieves only 7.5% accuracy on our task. Further analysis also reveals that the accuracy exhibits a linear decline with respect to the logarithm of the expected number of reasoning steps. Our code and data are available at https://github.com/LongReasonArena/LongReasonArena.
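The reported trend (accuracy declining linearly with the logarithm of the expected number of reasoning steps) can be summarized by a simple log-linear model. The sketch below is illustrative only: the `intercept` and `slope` values are hypothetical placeholders, not coefficients fitted in the paper.

```python
import math


def predicted_accuracy(expected_steps: int,
                       intercept: float = 0.9,
                       slope: float = 0.1) -> float:
    """Illustrative log-linear trend: accuracy drops linearly in log10(steps).

    The intercept and slope are assumed placeholder values, not results
    from the LongReasonArena paper.
    """
    acc = intercept - slope * math.log10(expected_steps)
    # Clamp to the valid accuracy range [0, 1].
    return max(0.0, min(1.0, acc))


if __name__ == "__main__":
    for steps in (10, 100, 1_000, 10_000, 100_000):
        print(f"{steps:>7} expected steps -> predicted accuracy {predicted_accuracy(steps):.2f}")
```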

Repos / Data Links
https://github.com/LongReasonArena/LongReasonArena

Page Count
17 pages

Category
Computer Science: Computation and Language