Score: 2

JustRL: Scaling a 1.5B LLM with a Simple RL Recipe

Published: December 18, 2025 | arXiv ID: 2512.16649v1

By: Bingxiang He, Zekai Qu, Zeyuan Liu, and more

Potential Business Impact:

Shows that a small (1.5B-parameter) language model can be trained to reason more accurately with a simpler, cheaper reinforcement-learning recipe.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advances in reinforcement learning for large language models have converged on increasing complexity: multi-stage training pipelines, dynamic hyperparameter schedules, and curriculum learning strategies. This raises a fundamental question: is this complexity necessary? We present JustRL, a minimal approach using single-stage training with fixed hyperparameters that achieves state-of-the-art performance on two 1.5B reasoning models (54.9% and 64.3% average accuracy across nine mathematical benchmarks) while using 2× less compute than sophisticated approaches. The same hyperparameters transfer across both models without tuning, and training exhibits smooth, monotonic improvement over 4,000+ steps without the collapses or plateaus that typically motivate interventions. Critically, ablations reveal that adding "standard tricks" like explicit length penalties and robust verifiers may degrade performance by collapsing exploration. These results suggest that the field may be adding complexity to solve problems that disappear with a stable, scaled-up baseline. We release our models and code to establish a simple, validated baseline for the community.
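The abstract's core claim is structural: one training stage, one fixed set of hyperparameters, no schedules or curricula. The short Python sketch below illustrates what such a recipe looks like in outline. It is not the paper's released code; the Config values, the GRPO-style group-mean baseline, and the stub environment are all assumptions made for illustration, since the abstract only states that a single fixed configuration is used unchanged for 4,000+ steps.

import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    # Frozen on purpose: no schedules, no stages, no mid-run tuning.
    # Values are illustrative assumptions, not the paper's settings.
    lr: float = 1e-6        # assumed learning rate
    group_size: int = 8     # rollouts per prompt (GRPO-style, assumed)
    steps: int = 4000       # abstract reports 4,000+ smooth steps

def group_advantages(rewards):
    """Group-relative advantage: each reward minus the group mean.

    Binary rewards (1 = verified-correct answer, 0 = wrong) are a common
    choice for math RL; the paper's actual verifier is not shown here.
    """
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

def train(cfg: Config, sample_rollouts, update_policy):
    # Single stage: the same three lines repeat for every step.
    for step in range(cfg.steps):
        rewards = sample_rollouts(cfg.group_size)
        advantages = group_advantages(rewards)
        update_policy(advantages, cfg.lr)

if __name__ == "__main__":
    # Stub environment and optimizer so the sketch runs end to end.
    train(
        Config(),
        sample_rollouts=lambda k: [random.random() < 0.3 for _ in range(k)],
        update_policy=lambda advantages, lr: None,
    )

The point of the sketch is what is absent: there is no warm-up phase, no length penalty, no hyperparameter schedule, and nothing changes between step 1 and step 4,000, which is exactly the simplicity the paper argues is sufficient.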

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Computation and Language