Enhancing LLM Reasoning via Non-Human-Like Reasoning Path Preference Optimization
By: Junjie Lu, Yuliang Liu, Chaofeng Qu, and more
Potential Business Impact:
Teaches computers to think better, even if it's weird.
Current approaches for strengthening LLM reasoning tend to introduce a training bias toward human-like reasoning trajectories. In step-wise preference optimization in particular, dependence on human or higher-capacity model annotations for intermediate steps limits exploration of alternative, non-human-like reasoning paths and thus constrains achievable performance. Furthermore, through a small-scale pilot study, we observed that in approximately 75% of cases the model's first erroneous step occurs after its lowest-confidence point. This suggests that guiding the model at its lowest-confidence point before an error provides more accurate supervision than locating the first explicit error. In this paper, we propose Confidence-Guided Reasoning Path Preference Optimization (CGPO), a method that leverages a confidence signal to identify points of maximal uncertainty in the model's reasoning process and applies self-generated, non-human-like reasoning-path guidance at those points to mitigate trajectory drift. Our experiments span diverse models on both code and mathematical reasoning tasks. The results show that, with the same amount of training data, our method using data generated by a small model outperforms, in most cases, approaches that use data generated by a stronger model or annotated by humans.
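The abstract describes two mechanical steps: scoring each reasoning step with a confidence signal, then anchoring a preference pair at the lowest-confidence step rather than at the first explicit error. The sketch below illustrates one plausible reading of that pipeline; the step segmentation, the confidence measure (mean token log-probability), and all function and field names are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: locate the lowest-confidence reasoning step and build a
# step-wise preference pair around it. Assumes per-token log-probabilities
# from the policy model are available for each segmented step.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ReasoningStep:
    text: str
    token_logprobs: List[float]  # per-token log-probabilities for this step

    @property
    def confidence(self) -> float:
        # Assumed confidence signal: mean token log-probability of the step.
        return sum(self.token_logprobs) / len(self.token_logprobs)


def lowest_confidence_index(steps: List[ReasoningStep]) -> int:
    """Index of the step where the model is least confident."""
    return min(range(len(steps)), key=lambda i: steps[i].confidence)


def build_preference_pair(
    steps: List[ReasoningStep],
    preferred_continuation: str,
    original_continuation: str,
) -> Dict[str, str]:
    """Anchor a preference pair at the lowest-confidence step.

    The shared prefix is everything before that step; 'chosen' is a
    self-generated continuation verified to reach the correct answer,
    'rejected' is the model's original (possibly drifting) continuation.
    """
    k = lowest_confidence_index(steps)
    prefix = " ".join(s.text for s in steps[:k])
    return {
        "prompt_prefix": prefix,
        "chosen": preferred_continuation,
        "rejected": original_continuation,
    }


if __name__ == "__main__":
    trace = [
        ReasoningStep("Let x be the number of apples.", [-0.1, -0.2, -0.1]),
        ReasoningStep("Then 3x + 2 = 14, so x = 6.", [-1.8, -2.1, -1.5]),  # least confident
        ReasoningStep("Therefore the answer is 6.", [-0.3, -0.2]),
    ]
    pair = build_preference_pair(
        trace,
        preferred_continuation="From 3x + 2 = 14 we get 3x = 12, hence x = 4.",
        original_continuation="Then 3x + 2 = 14, so x = 6. Therefore the answer is 6.",
    )
    print("Lowest-confidence step index:", lowest_confidence_index(trace))
    print(pair)
```

Pairs built this way could then be fed to any step-wise preference objective (e.g., a DPO-style loss over the chosen and rejected continuations); that downstream training step is outside the scope of this sketch.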
Similar Papers
Self-Training Large Language Models with Confident Reasoning
Machine Learning (CS)
Teaches computers to think better, not just guess.
GPO: Learning from Critical Steps to Improve LLM Reasoning
Artificial Intelligence
Teaches computers to think better step-by-step.
Pruning Long Chain-of-Thought of Large Reasoning Models via Small-Scale Preference Optimization
Artificial Intelligence
Makes smart computers think faster, shorter answers.