Enhancing Factual Accuracy and Citation Generation in LLMs via Multi-Stage Self-Verification

Published: September 6, 2025 | arXiv ID: 2509.05741v1

By: Fernando Gabriela García, Qiyang Shi, Zilin Feng

Potential Business Impact:

Makes AI-generated answers more factually accurate and backed by verifiable citations.

Business Areas:
Semantic Search, Internet Services

This research introduces VeriFact-CoT (Verified Factual Chain-of-Thought), a novel method designed to address two pervasive problems that Large Language Models (LLMs) face when generating complex, fact-sensitive content: hallucination and the absence of credible citation sources. Through a multi-stage mechanism of fact verification, reflection, and citation integration, VeriFact-CoT enables LLMs to critically self-examine and revise their intermediate reasoning steps and final answers. This process significantly improves the objective accuracy, trustworthiness, and traceability of the generated outputs, making LLMs more reliable for applications that demand high fidelity, such as scientific research, news reporting, and legal consultation.
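As a rough illustration of the verify-reflect-cite idea, the sketch below wires a draft answer through repeated rounds of claim checking, revision, and citation attachment. It is a minimal sketch under assumptions, not the paper's implementation: the function names `call_llm` and `lookup_evidence`, the line-based claim splitting, and the round limit are all hypothetical stand-ins.

```python
# Minimal sketch of a multi-stage "verify -> reflect -> cite" loop in the spirit of
# VeriFact-CoT. `call_llm` and `lookup_evidence` are hypothetical placeholders for a
# real LLM client and a real evidence-retrieval backend.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    verified: bool = False
    citation: str | None = None


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API or local model."""
    return "placeholder model output"


def lookup_evidence(claim: str) -> str | None:
    """Hypothetical retrieval step: return a supporting source ID/URL, or None."""
    return None


def verifact_cot_sketch(question: str, max_rounds: int = 2) -> dict:
    # Stage 1: draft a chain-of-thought answer as a list of intermediate factual claims.
    draft = call_llm(f"Answer step by step, listing factual claims:\n{question}")
    claims = [Claim(line.strip()) for line in draft.splitlines() if line.strip()]

    for _ in range(max_rounds):
        # Stage 2: fact verification -- check each intermediate claim against evidence.
        for claim in claims:
            evidence = lookup_evidence(claim.text)
            claim.verified = evidence is not None
            claim.citation = evidence

        unsupported = [c.text for c in claims if not c.verified]
        if not unsupported:
            break

        # Stage 3: reflection -- ask the model to revise or drop unsupported claims.
        revision = call_llm(
            "Revise these unsupported claims or drop them:\n" + "\n".join(unsupported)
        )
        claims = [Claim(line.strip()) for line in revision.splitlines() if line.strip()]

    # Stage 4: citation integration -- attach sources to the final answer.
    answer = "\n".join(
        f"{c.text} [{c.citation}]" if c.citation else c.text for c in claims
    )
    return {"answer": answer, "claims": claims}
```

In a real system, the retrieval backend and the prompt wording would do most of the work; this sketch only shows how the verification and reflection stages could feed back into one another before citations are attached.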

Page Count
16 pages

Category
Computer Science:
Computation and Language