Score: 1

Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection

Published: March 12, 2025 | arXiv ID: 2503.09153v1

By: Chaowei Zhang, Zongling Feng, Zewei Zhang and more

Potential Business Impact:

Helps computers spot fake news by using an AI's own wrong explanations of a story as clues.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Questionable responses caused by knowledge hallucination can make LLMs unstable in decision-making. However, it has never been investigated whether LLM hallucination can be exploited to generate negative reasoning that facilitates fake news detection. This study proposes a novel supervised self-reinforced reasoning rectification approach, SR^3, which yields both reasonable reasoning and wrong understandings (negative reasoning) for news via LLM reflection, for semantic consistency learning. On top of that, we construct a negative-reasoning-based news learning model, NRFE, which leverages positive or negative news-reasoning pairs to learn the semantic consistency between them. To avoid the impact of label-implicated reasoning, we deploy a student model, NRFE-D, which takes only news content as input, and inspect the performance of our method by distilling knowledge from NRFE. Experimental results on three popular fake news datasets demonstrate the superiority of our method over three kinds of baselines: prompting LLMs, fine-tuning pre-trained SLMs, and other representative fake news detection methods.
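
To make the pipeline concrete, the sketch below shows one way the core idea described in the abstract might be wired up: prompt an LLM for both a plausible rationale and a deliberately wrong (hallucinated) rationale about a news item, then train a classifier to judge semantic consistency between news and reasoning. This is a minimal illustration, not the authors' implementation; the prompts, the bert-base-uncased encoder, the binary consistency head, and the generic `llm` callable are all assumptions.

```python
# Illustrative sketch (assumptions noted above), not the paper's released code.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Hypothetical prompts: one asks for sound reasoning, the other deliberately
# harvests a hallucinated (wrong) reading of the same news item.
POS_PROMPT = "Explain step by step what the following news report claims and why:\n{news}"
NEG_PROMPT = ("Deliberately misread the following news report and give a confidently "
              "wrong explanation of what it claims:\n{news}")

def generate_reasoning(llm, news: str) -> tuple[str, str]:
    """Query any text-in/text-out LLM for one positive and one negative rationale."""
    positive = llm(POS_PROMPT.format(news=news))
    negative = llm(NEG_PROMPT.format(news=news))
    return positive, negative

class ConsistencyClassifier(nn.Module):
    """Scores whether a (news, reasoning) pair is semantically consistent."""
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 2)  # consistent vs. not

    def forward(self, news: list[str], reasoning: list[str]) -> torch.Tensor:
        # Encode each news/reasoning pair jointly and classify from the [CLS] token.
        batch = self.tokenizer(news, reasoning, padding=True, truncation=True,
                               return_tensors="pt")
        cls = self.encoder(**batch).last_hidden_state[:, 0]
        return self.head(cls)

def training_step(model, optimizer, news, pos_reasoning, neg_reasoning):
    """One step: positive pairs are labeled consistent (1), hallucinated pairs 0."""
    logits = model(news + news, pos_reasoning + neg_reasoning)
    labels = torch.tensor([1] * len(news) + [0] * len(news))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's setup, a student model (NRFE-D) that sees only the news text is then distilled from the pair-based teacher, so that evaluation does not depend on reasoning that might leak the label; one plausible realization is training a news-only encoder to match the teacher's output logits.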

Country of Origin
🇨🇳 🇭🇰 China, Hong Kong

Page Count
9 pages

Category
Computer Science:
Computation and Language