Teaching LLM to Reason: Reinforcement Learning from Algorithmic Problems without Code

Published: July 10, 2025 | arXiv ID: 2507.07498v2

By: Keqin Bao, Nuo Chen, Xiaoyuan Li, and others

Potential Business Impact:

Trains language models to reason more effectively by having them practice simulating code execution step by step.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Enhancing reasoning capabilities remains a central focus in the LLM research community. A promising direction involves requiring models to simulate code execution step-by-step to derive outputs for given inputs. However, as code is often designed for large-scale systems, direct application leads to over-reliance on complex data structures and algorithms, even for simple cases, resulting in overfitting to algorithmic patterns rather than core reasoning structures. To address this, we propose TeaR, which aims at teaching LLMs to reason better. TeaR leverages careful data curation and reinforcement learning to guide models in discovering optimal reasoning paths through code-related tasks, thereby improving general reasoning abilities. We conduct extensive experiments using two base models and three long-CoT distillation models, with model sizes ranging from 1.5 billion to 32 billion parameters, across 17 benchmarks spanning Math, Knowledge, Code, and Logical Reasoning. The results consistently show significant performance improvements. Notably, TeaR achieves a 35.9% improvement on Qwen2.5-7B and 5.9% on R1-Distilled-7B.
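To make the task concrete, here is a minimal sketch (not the paper's actual TeaR pipeline) of a code-execution reasoning exercise: the model is asked to trace a short program for a given input, and the ground-truth answer is obtained by actually running the code, which yields a verifiable reward signal for reinforcement learning. The snippet and function names below are illustrative assumptions.

```python
# Hedged sketch of a code-execution simulation task: the model must predict
# the output of a short program; running the code gives a checkable answer.

SNIPPET = """
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
"""

def make_task(input_value):
    """Build a (prompt, answer) pair for step-by-step execution simulation."""
    namespace = {}
    exec(SNIPPET, namespace)  # execute the snippet to obtain the function
    answer = namespace["collatz_steps"](input_value)
    prompt = (
        "Simulate the following code step by step and give the final output.\n"
        f"{SNIPPET}\nInput: collatz_steps({input_value})"
    )
    return prompt, answer

prompt, answer = make_task(6)
print(answer)  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1, so 8 steps
```

Because the answer is computed by execution rather than annotated by hand, a reward model can score a model's final answer exactly, which is the kind of verifiable signal RL-based reasoning training relies on.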

Page Count
15 pages

Category
Computer Science:
Computation and Language