KDCM: Reducing Hallucination in LLMs through Explicit Reasoning Structures
By: Jinbo Hao, Kai Yang, Qingzhen Su, and more
Potential Business Impact:
Makes AI answer questions more truthfully.
To mitigate hallucinations in large language models (LLMs), we propose a framework that focuses on errors induced by prompts. Our method extends a chain-style knowledge distillation approach by incorporating a programmable module that guides knowledge graph exploration. This module is embedded as executable code within the reasoning prompt, allowing the model to leverage external structured knowledge during inference. Based on this design, we develop an enhanced distillation-based reasoning framework that explicitly regulates intermediate reasoning steps, resulting in more reliable predictions. We evaluate the proposed approach on multiple public benchmarks using GPT-4 and LLaMA-3.3. Experimental results show that code-guided reasoning significantly improves contextual modeling and reduces prompt-induced hallucinations. Specifically, HIT@1, HIT@3, and HIT@5 increase by 15.64%, 13.38%, and 13.28%, respectively, with scores exceeding 95% across several evaluation settings. These findings indicate that the proposed method effectively constrains erroneous reasoning while improving both accuracy and interpretability.
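The abstract does not include the programmable module itself, so the following is only a minimal sketch of the general idea it describes: a small, executable knowledge-graph exploration routine whose source code and retrieved facts are embedded directly in the reasoning prompt. The toy graph, the `explore` and `build_prompt` helpers, and the prompt wording are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a "programmable module"
# that walks a toy knowledge graph, plus a prompt template that embeds both
# the module's source and its retrieved facts so the LLM reasons over
# explicit structure instead of free-form recall.

import inspect
from collections import deque

# Toy knowledge graph: head entity -> list of (relation, tail) edges. Hypothetical data.
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("member_of", "European Union")],
}

def explore(kg, start, max_hops=2):
    """Breadth-first exploration up to max_hops; returns (head, relation, tail) triples."""
    triples, frontier, seen = [], deque([(start, 0)]), {start}
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for relation, tail in kg.get(node, []):
            triples.append((node, relation, tail))
            if tail not in seen:
                seen.add(tail)
                frontier.append((tail, depth + 1))
    return triples

def build_prompt(question, entity):
    """Embed the exploration code and its output as the explicit reasoning structure."""
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in explore(KG, entity))
    return (
        "Answer using only the facts retrieved by the module below.\n\n"
        f"# Exploration module\n{inspect.getsource(explore)}\n"
        f"# Retrieved facts\n{facts}\n\n"
        f"Question: {question}\n"
        "Answer step by step, citing one retrieved fact per step."
    )

if __name__ == "__main__":
    print(build_prompt("Which union is the country whose capital is Paris a member of?", "Paris"))
```

The intent of such a sketch is that the model's intermediate steps are constrained to facts the module actually retrieved, which is one plausible way to realize the paper's goal of regulating intermediate reasoning and limiting prompt-induced hallucination.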
Similar Papers
Mitigating Prompt-Induced Hallucinations in Large Language Models via Structured Reasoning
Computation and Language
Makes AI tell the truth, not make things up.
Mitigating Hallucinations in Large Language Models via Causal Reasoning
Computation and Language
Teaches computers to think logically, reducing fake answers.
Detection and Mitigation of Hallucination in Large Reasoning Models: A Mechanistic Perspective
Artificial Intelligence
Finds and fixes when smart computers make up answers.