Self-Training Large Language Models with Confident Reasoning
By: Hyosoon Jang, Yunhui Jang, Sungjae Lee, and more
Potential Business Impact:
Teaches computers to think better, not just guess.
Large language models (LLMs) have shown impressive performance by generating reasoning paths before final answers, but learning such reasoning paths requires costly human supervision. To address this issue, recent studies have explored self-training methods that improve reasoning capabilities using pseudo-labels generated by the LLMs themselves. Among these, confidence-based self-training fine-tunes LLMs to prefer reasoning paths with high-confidence answers, where confidence is estimated via majority voting. However, such methods focus exclusively on the quality of the final answer and may ignore the quality of the reasoning paths, since even an incorrect reasoning path can lead to a correct answer by chance. Instead, we advocate the use of reasoning-level confidence to identify high-quality reasoning paths for self-training, supported by our empirical observations. We then propose a new self-training method, CORE-PO, which fine-tunes LLMs to prefer high-COnfidence REasoning paths through Policy Optimization. Our experiments show that CORE-PO improves output accuracy on four in-distribution and two out-of-distribution benchmarks compared to existing self-training methods.
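To make the distinction in the abstract concrete, here is a minimal Python sketch contrasting answer-level confidence (majority voting over final answers) with reasoning-level confidence (preferring the path whose reasoning itself scores highest). The sampled outputs and confidence values are made-up illustrative data, and the helper functions are hypothetical; this is not the paper's actual scoring or training procedure, which uses policy optimization on an LLM.

```python
from collections import Counter

# Toy sampled outputs for one question: (reasoning_path, final_answer, reasoning_confidence).
# In practice these would come from sampling an LLM several times and scoring each
# reasoning path; the confidence values here are illustrative placeholders.
samples = [
    ("Add 12 and 30, then halve the result: (12 + 30) / 2 = 21.", "21", 0.92),
    ("12 + 30 = 42, and half of 42 is 21.",                        "21", 0.88),
    ("Guess: the answer looks like 21.",                           "21", 0.31),
    ("12 * 30 = 360, so the answer is 360.",                       "360", 0.45),
]

def answer_confidence(samples):
    """Answer-level confidence via majority voting: the fraction of sampled
    outputs that agree with the most common final answer."""
    votes = Counter(answer for _, answer, _ in samples)
    top_answer, top_count = votes.most_common(1)[0]
    return top_answer, top_count / len(samples)

def preferred_by_reasoning_confidence(samples):
    """Reasoning-level selection: prefer the path whose reasoning itself is
    judged most confident, rather than only checking its final answer."""
    return max(samples, key=lambda s: s[2])

top_answer, conf = answer_confidence(samples)
best_reasoning, best_answer, best_conf = preferred_by_reasoning_confidence(samples)

print(f"Majority answer: {top_answer} (answer-level confidence {conf:.2f})")
print(f"Preferred path (reasoning-level confidence {best_conf:.2f}): {best_reasoning}")
```

Note how a low-quality path ("Guess: the answer looks like 21.") still boosts the majority vote for the correct answer, which is exactly the failure mode the abstract describes; scoring the reasoning itself avoids rewarding such paths.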
Similar Papers
Self-Training Elicits Concise Reasoning in Large Language Models
Computation and Language
Makes AI think smarter with fewer words.
Reflective Confidence: Correcting Reasoning Flaws via Online Self-Correction
Artificial Intelligence
Helps AI fix its own thinking mistakes.
Enhancing LLM Reasoning via Non-Human-Like Reasoning Path Preference Optimization
Computation and Language
Teaches computers to think better, even if it's weird.