Forward versus Backward: Comparing Reasoning Objectives in Direct Preference Optimization
By: Murtaza Nikzad, Raghuram Ramanujan
Potential Business Impact:
Makes AI smarter and less likely to lie.
Large language models exhibit impressive reasoning capabilities yet frequently generate plausible but incorrect solutions, a phenomenon commonly termed hallucination. This paper investigates the effect of training objective composition on reasoning reliability through Direct Preference Optimization. Two complementary training signals are examined: forward chain-of-thought generation, which trains the model to produce correct reasoning traces, and backward verification, which trains the model to verify and acknowledge errors in candidate solutions. Experiments on GSM8K reveal a fundamental trade-off between these objectives. Forward-only DPO training achieves the highest accuracy improvement, increasing from 83.1% to 86.6% (+3.5 percentage points), while backward-only training yields minimal accuracy gains but substantially reduces the false positive rate from 13.4% to 4.3%. Notably, both training variants reduce acknowledgement rate compared to the baseline, suggesting that preference optimization increases model confidence in its outputs. These findings indicate that forward and backward reasoning objectives provide distinct and complementary learning signals: forward training improves problem-solving capability, while backward training improves verification calibration. The complete training and evaluation pipeline, implemented efficiently through Low-Rank Adaptation, is released to facilitate further research.
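For context, the Direct Preference Optimization objective referenced in the abstract (Rafailov et al., 2023) is reproduced below. The abstract does not spell out the paper's exact pair construction, but in this framing a forward pair would prefer a correct reasoning trace over an incorrect one for the same problem, while a backward pair would prefer a correct verification judgment over an incorrect one; this is an interpretive sketch, not the authors' stated formulation.

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} \;-\; \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]

Here x is the prompt, y_w and y_l are the preferred and dispreferred responses, \pi_{\mathrm{ref}} is the frozen reference policy, \beta controls the strength of the KL-style penalty, and \sigma is the logistic function; the paper's specific choice of \beta and reference model is not given in the abstract.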
Similar Papers
Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation
Computation and Language
Makes AI smarter with less computer power.
Evaluating GRPO and DPO for Faithful Chain-of-Thought Reasoning in LLMs
Computation and Language
Makes AI show its real thinking steps.
DA-DPO: Cost-efficient Difficulty-aware Preference Optimization for Reducing MLLM Hallucinations
Artificial Intelligence
Teaches AI to avoid making up fake answers.