Mitigating Prompt-Induced Hallucinations in Large Language Models via Structured Reasoning
By: Jinbo Hao, Kai Yang, Qingzhen Su, and more
Potential Business Impact:
Makes AI tell the truth instead of making things up.
To address hallucination issues in large language models (LLMs), this paper proposes a method for mitigating prompt-induced hallucinations. Building on a knowledge distillation chain-style model, we introduce a code module to guide knowledge-graph exploration and incorporate code as part of the chain-of-thought prompt, forming an external knowledge input that provides more accurate and structured information to the model. Based on this design, we develop an improved knowledge distillation chain-style model and leverage it to analyze and constrain the reasoning process of LLMs, thereby improving inference accuracy. We empirically evaluate the proposed approach using GPT-4 and LLaMA-3.3 on multiple public datasets. Experimental results demonstrate that incorporating code modules significantly enhances the model's ability to capture contextual information and effectively mitigates prompt-induced hallucinations. Specifically, HIT@1, HIT@3, and HIT@5 improve by 15.64%, 13.38%, and 13.28%, respectively. Moreover, the proposed method achieves HIT@1, HIT@3, and HIT@5 scores exceeding 95% across several evaluation settings. These results indicate that the proposed approach substantially reduces hallucination behavior while improving the accuracy and verifiability of large language models.
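The sketch below is a minimal illustration (not the authors' implementation) of the core idea in the abstract: a small code module explores a knowledge graph, and both the code and the facts it retrieves are embedded in a chain-of-thought prompt as structured external knowledge. The toy triple store `KG` and the helpers `explore_kg` and `build_cot_prompt` are illustrative names assumed here, not from the paper.

```python
# Minimal sketch: a code module guides knowledge-graph exploration, and the
# code plus its retrieved facts are placed into a chain-of-thought prompt.
# All names and the toy graph are illustrative assumptions.

import inspect
from typing import List, Tuple

# Toy knowledge graph as (head, relation, tail) triples.
KG: List[Tuple[str, str, str]] = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "population", "2.1 million"),
]

def explore_kg(entity: str, max_hops: int = 2) -> List[Tuple[str, str, str]]:
    """Collect triples reachable from `entity` within `max_hops` hops."""
    frontier, found = {entity}, []
    for _ in range(max_hops):
        next_frontier = set()
        for h, r, t in KG:
            if h in frontier:
                found.append((h, r, t))
                next_frontier.add(t)
        frontier = next_frontier
    return found

def build_cot_prompt(question: str, entity: str) -> str:
    """Embed the exploration code and its output as structured external knowledge."""
    triples = explore_kg(entity)
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    return (
        "You are given a code module that explores a knowledge graph,\n"
        "followed by the facts it retrieved. Reason step by step and\n"
        "answer using only these facts.\n\n"
        "### Exploration code\n" + inspect.getsource(explore_kg) + "\n"
        "### Retrieved facts\n" + facts + "\n\n"
        "### Question\n" + question + "\n### Reasoning\n"
    )

if __name__ == "__main__":
    prompt = build_cot_prompt("Which continent is Paris located in?", "Paris")
    print(prompt)  # This prompt would then be sent to a model such as GPT-4 or LLaMA-3.3.
```

The prompt constructed this way constrains the model's reasoning to verifiable, structured facts, which is the mechanism the paper credits for reducing prompt-induced hallucinations.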
Similar Papers
KDCM: Reducing Hallucination in LLMs through Explicit Reasoning Structures
Computation and Language
Uses explicit reasoning structures so the model answers questions more truthfully.
Multi-stage Prompt Refinement for Mitigating Hallucinations in Large Language Models
Computation and Language
Refines prompts in multiple stages so the model gives more accurate answers.
CPR: Mitigating Large Language Model Hallucinations with Curative Prompt Refinement
Computation and Language
Repairs flawed prompts so the model's answers are more truthful.