Reflective Confidence: Correcting Reasoning Flaws via Online Self-Correction
By: Qinglin Zeng, Jing Yang, Keze Wang
Large language models (LLMs) have achieved strong performance on complex reasoning tasks using techniques such as chain-of-thought prompting and self-consistency. Ensemble-based approaches, however, and especially self-consistency, which samples multiple reasoning trajectories, often incur substantial computational overhead. To improve efficiency, prior work has leveraged internal confidence signals: early-stopping strategies such as DeepConf reduce cost by terminating low-confidence trajectories, but this discards incomplete reasoning paths and wastes the partial computation already spent. We propose reflective confidence, a novel reasoning framework that transforms low-confidence signals from termination indicators into reflection triggers. When confidence falls below a threshold, instead of halting generation, the model produces a reflection prompt to analyze the current reasoning state, identify potential errors, and continue generation along a corrected trajectory. Experiments on mathematical reasoning benchmarks, including AIME 2025, demonstrate significant accuracy improvements over advanced early-stopping baselines at comparable computational cost, validating the effectiveness of proactive self-correction over passive discarding.
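The abstract's core loop can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `generate_step`, its simulated confidence scores, the threshold value, and the wording of the reflection prompt are all assumptions standing in for a real decoder and its token-level confidence signal.

```python
from dataclasses import dataclass, field

CONF_THRESHOLD = 0.5  # assumed threshold; the paper's actual value may differ
REFLECTION_PROMPT = "Wait, let me re-examine the previous step for errors."

@dataclass
class Trace:
    steps: list = field(default_factory=list)
    reflections: int = 0

def generate_step(trace):
    """Stand-in for one decoding step; returns (text, confidence).

    Simulates confidence dropping once mid-trajectory, then recovering
    after a reflection prompt has been injected.
    """
    if len(trace.steps) == 2 and trace.reflections == 0:
        return ("dubious algebra step", 0.3)
    return (f"step {len(trace.steps)}", 0.9)

def reflective_decode(max_steps=6):
    trace = Trace()
    while len(trace.steps) < max_steps:
        text, conf = generate_step(trace)
        if conf < CONF_THRESHOLD:
            # Instead of terminating the trajectory (as early-stopping
            # methods like DeepConf would), discard the low-confidence
            # step, inject a reflection prompt, and keep decoding.
            trace.steps.append(REFLECTION_PROMPT)
            trace.reflections += 1
            continue
        trace.steps.append(text)
    return trace

trace = reflective_decode()
print(trace.reflections)   # one low-confidence event triggered reflection
print(len(trace.steps))
```

The key design point, per the abstract, is that the confidence check gates a branch between "stop" and "reflect and continue"; here the low-confidence step is replaced by a reflection prompt so the trajectory is corrected rather than thrown away.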
Similar Papers
Deep Think with Confidence
Machine Learning (CS)
Makes smart computer answers better and faster.
ReflCtrl: Controlling LLM Reflection via Representation Engineering
Artificial Intelligence
Control AI's thinking to save energy.
Language Models can perform Single-Utterance Self-Correction of Perturbed Reasoning
Computation and Language
Computers can fix their own math mistakes.