Do Not Step Into the Same River Twice: Learning to Reason from Trial and Error

Published: October 30, 2025 | arXiv ID: 2510.26109v1

By: Chenming Tang, Hsiu-Yuan Huang, Weijie Liu, and more

Potential Business Impact:

Teaches computers to learn better from mistakes.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning with verifiable rewards (RLVR) has recently boosted the reasoning capability of large language models (LLMs) significantly. However, existing RLVR approaches train LLMs only on their own generated responses and are thus constrained by the models' initial capability, making them prone to exploration stagnation: LLMs fail to solve new training problems and can no longer learn from the training data. Some work addresses this by leveraging off-policy solutions to training problems, but this requires external expert guidance, which is of limited availability. In this work, we propose LTE (Learning to reason from Trial and Error), an approach that hints LLMs with their previously self-generated incorrect answers and the problem of overlong responses, without requiring any external expert guidance. Experiments validate the effectiveness of LTE: it outperforms standard group relative policy optimization (GRPO) by 6.38 in Pass@1 and 9.00 in Pass@k on average across six mathematics benchmarks with Qwen3-4B-Base. Further analysis confirms that LTE successfully mitigates exploration stagnation and enhances both exploitation and exploration during training.
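To make the core idea concrete, below is a minimal sketch of how hinting with prior mistakes could be wired into a GRPO-style RLVR loop, based only on the abstract's description. All names here (make_hinted_prompt, sample_responses, verify, grpo_update, group_size) are hypothetical placeholders, not the authors' actual implementation or API.

```python
# A hedged sketch of the LTE idea: before each GRPO rollout, augment the
# training prompt with the model's own previously incorrect answers so it
# does not "step into the same river twice". Hypothetical helper names.
from collections import defaultdict

# problem_id -> set of incorrect final answers collected in earlier epochs
wrong_answers: dict[str, set[str]] = defaultdict(set)

def make_hinted_prompt(problem_id: str, question: str) -> str:
    """Prepend a hint listing prior incorrect answers, if any exist."""
    prior = wrong_answers[problem_id]
    if not prior:
        return question
    hint = ("Your previous attempts gave these incorrect answers: "
            + ", ".join(sorted(prior)) + ". Avoid repeating them.\n")
    return hint + question

def lte_training_step(problem_id, question, gold_answer,
                      sample_responses, verify, grpo_update):
    """One RLVR step: sample a group of responses from the hinted prompt,
    score each with the verifiable reward, record new mistakes for future
    hints, and apply a GRPO-style update on the group-relative rewards."""
    prompt = make_hinted_prompt(problem_id, question)
    responses = sample_responses(prompt, group_size=8)  # on-policy rollouts
    rewards = []
    for resp in responses:
        correct = verify(resp.final_answer, gold_answer)  # verifiable reward
        rewards.append(1.0 if correct else 0.0)
        if not correct:
            wrong_answers[problem_id].add(resp.final_answer)
    grpo_update(prompt, responses, rewards)
```

Because the hints come from the model's own rollout history rather than expert-written solutions, this setup stays self-contained, which is the availability advantage the abstract claims over off-policy, expert-guided approaches.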

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)