Thinking, Faithful and Stable: Mitigating Hallucinations in LLMs
By: Chelsea Zou, Yiheng Yao, Basant Khalil
Potential Business Impact:
Makes AI think more carefully and be more truthful.
This project develops a self-correcting framework for large language models (LLMs) that detects and mitigates hallucinations during multi-step reasoning. Rather than relying solely on final-answer correctness, our approach leverages fine-grained uncertainty signals: 1) self-assessed confidence alignment, and 2) token-level entropy spikes, to detect unreliable and unfaithful reasoning in real time. We design a composite reward function that penalizes unjustified high confidence and entropy spikes while encouraging stable and accurate reasoning trajectories. These signals guide a reinforcement learning (RL) policy that makes the model more introspective and shapes its generation behavior through confidence-aware reward feedback, improving not just outcome correctness but also the coherence and faithfulness of its intermediate reasoning steps. Experiments show that our method improves both final-answer accuracy and reasoning calibration, with ablations validating the individual contribution of each signal.
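To make the reward design more concrete, the PyTorch sketch below shows one plausible way the two uncertainty signals could be combined into a scalar reward for RL training. The helper names (token_entropy, entropy_spike_penalty, confidence_alignment_penalty), the weights, and the spike threshold are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of a composite reward, under stated assumptions:
# per-token logits for the reasoning trace are available, the model emits a
# self-assessed confidence in [0, 1], and final-answer correctness is known
# during training. All weights and thresholds below are placeholders.

import torch
import torch.nn.functional as F


def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-token Shannon entropy from a [seq_len, vocab_size] logits tensor."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)  # shape: [seq_len]


def entropy_spike_penalty(entropies: torch.Tensor, spike_threshold: float = 2.0) -> float:
    """Count sudden upward jumps in entropy between adjacent tokens."""
    jumps = (entropies[1:] - entropies[:-1]).clamp(min=0.0)
    return float((jumps > spike_threshold).sum())


def confidence_alignment_penalty(self_confidence: float, is_correct: bool) -> float:
    """Penalize unjustified high confidence (confident but wrong) sharply,
    and under-confidence on correct answers mildly."""
    if is_correct:
        return 1.0 - self_confidence
    return self_confidence ** 2


def composite_reward(logits: torch.Tensor,
                     self_confidence: float,
                     is_correct: bool,
                     w_acc: float = 1.0,
                     w_conf: float = 0.5,
                     w_ent: float = 0.1) -> float:
    """Accuracy term minus confidence-misalignment and entropy-spike penalties."""
    entropies = token_entropy(logits)
    reward = w_acc * float(is_correct)
    reward -= w_conf * confidence_alignment_penalty(self_confidence, is_correct)
    reward -= w_ent * entropy_spike_penalty(entropies)
    return reward
```

In a setup like this, the scalar returned by composite_reward would be fed to a standard policy-gradient optimizer (e.g., PPO) as the per-trajectory reward, so the policy is pushed toward reasoning traces that are correct, calibrated, and free of entropy spikes.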
Similar Papers
Multi-Modal Fact-Verification Framework for Reducing Hallucinations in Large Language Models
Artificial Intelligence
Fixes AI lies to make it more truthful.
Diagnosing Hallucination Risk in AI Surgical Decision-Support: A Sequential Framework for Sequential Validation
Machine Learning (CS)
Tests AI to make sure it's safe for surgery.
Mathematical Analysis of Hallucination Dynamics in Large Language Models: Uncertainty Quantification, Advanced Decoding, and Principled Mitigation
Computation and Language
Makes AI tell the truth, not make things up.