Score: 2

Enhancing LLM Reasoning via Non-Human-Like Reasoning Path Preference Optimization

Published: October 13, 2025 | arXiv ID: 2510.11104v1

By: Junjie Lu, Yuliang Liu, Chaofeng Qu and more

Potential Business Impact:

Teaches AI models to reason better, even along paths a human wouldn't take.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Current approaches for strengthening LLM reasoning tend to introduce a training bias toward human-like reasoning trajectories. In step-wise preference optimization, in particular, dependence on human or higher-capacity model annotations for intermediate steps limits exploration of alternative, non-human-like reasoning paths and thus constrains achievable performance. Furthermore, in a small-scale pilot study, we observed that in approximately 75% of cases the model's first erroneous step occurs after its lowest-confidence point. This suggests that guiding the model at its lowest-confidence point before an error provides more accurate supervision than locating the first explicit error. In this paper, we propose Confidence-Guided Reasoning Path Preference Optimization (CGPO), a method that leverages a confidence signal to identify points of maximal uncertainty in the model's reasoning process and applies self-generated, non-human-like reasoning-path guidance to mitigate trajectory drift. Our experiments span diverse models on both code and mathematical reasoning tasks. The results show that, with the same amount of training data, our method using data generated by a small model achieves better performance in most cases than approaches using data generated by a stronger model or annotated by humans.
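The core signal the abstract describes is locating the step where the model is least confident and steering or resampling from there, rather than at the first explicitly wrong step. The sketch below is a minimal illustration of that idea only, not the authors' implementation: it assumes reasoning steps come with per-token log-probabilities and uses their mean as a step-level confidence proxy; the `ReasoningStep` class and `lowest_confidence_index` helper are hypothetical names introduced here.

```python
# Minimal sketch (not the paper's code): find the lowest-confidence step in a
# chain-of-thought, using mean token log-probability per step as the proxy.
from dataclasses import dataclass
from typing import List


@dataclass
class ReasoningStep:
    text: str
    token_logprobs: List[float]  # log-probabilities of the tokens in this step

    @property
    def confidence(self) -> float:
        # Average token log-probability as a simple per-step confidence proxy.
        return sum(self.token_logprobs) / len(self.token_logprobs)


def lowest_confidence_index(steps: List[ReasoningStep]) -> int:
    """Return the index of the step where the model is least confident.

    A CGPO-style pipeline would branch at this point: resample continuations
    from the low-confidence step and build preference pairs, instead of
    annotating the first explicitly erroneous step.
    """
    return min(range(len(steps)), key=lambda i: steps[i].confidence)


if __name__ == "__main__":
    trace = [
        ReasoningStep("Let n = 12.", [-0.1, -0.2, -0.1]),
        ReasoningStep("Then n^2 = 144.", [-0.3, -1.9, -2.2]),  # least confident
        ReasoningStep("So the answer is 144.", [-0.2, -0.4]),
    ]
    print("Guide/resample at step:", lowest_confidence_index(trace))
```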

Country of Origin
🇦🇺 Australia


Page Count
13 pages

Category
Computer Science:
Computation and Language