Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection
By: Chaowei Zhang, Zongling Feng, Zewei Zhang, and more
Potential Business Impact:
Helps computers spot fake news by learning from an AI's own wrong reasoning.
Questionable responses caused by knowledge hallucination can make LLMs unstable in decision-making. However, it has never been investigated whether LLM hallucination can be exploited to generate negative reasoning that facilitates fake news detection. This study proposes a novel supervised self-reinforced reasoning rectification approach, SR$^3$, which produces both sound common reasoning and wrong understandings (negative reasoning) for news items via LLM reflection, enabling semantic consistency learning. Building on this, we construct a negative reasoning-based news learning model, \emph{NRFE}, which leverages positive or negative news-reasoning pairs to learn the semantic consistency between news and reasoning. To avoid the impact of label-implicated reasoning, we deploy a student model, \emph{NRFE-D}, that takes only news content as input and verifies the performance of our method by distilling knowledge from \emph{NRFE}. Experimental results on three popular fake news datasets demonstrate the superiority of our method over three kinds of baselines: prompting LLMs, fine-tuning pre-trained SLMs, and other representative fake news detection methods.
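To make the teacher-student setup described in the abstract concrete, here is a minimal, hedged sketch (not the authors' released code) of how a consistency-scoring teacher like NRFE, which reads a news-reasoning pair, could be distilled into a news-only student like NRFE-D. The encoder choice (bert-base-uncased), class names, temperature, and loss weighting are illustrative assumptions.

```python
# Hedged sketch of NRFE-style teacher -> NRFE-D-style student distillation.
# Not the paper's implementation; encoder, hyperparameters, and names are assumptions.
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel


class ConsistencyTeacher(nn.Module):
    """Scores whether a concatenated (news, reasoning) pair is semantically consistent."""
    def __init__(self, name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(h.last_hidden_state[:, 0])  # logits from the [CLS] token


class NewsOnlyStudent(nn.Module):
    """Takes only the news text as input; trained to mimic the frozen teacher."""
    def __init__(self, name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(h.last_hidden_state[:, 0])


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL divergence (teacher -> student) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In this reading of the abstract, the teacher would first be trained on positive and negative news-reasoning pairs, then frozen while the student fits the distillation objective from news text alone, so that no label-implicated reasoning reaches the student at inference time.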
Similar Papers
Neutralizing Bias in LLM Reasoning using Entailment Graphs
Computation and Language
Makes AI understand stories better, not just memorize.
Thinking, Faithful and Stable: Mitigating Hallucinations in LLMs
Artificial Intelligence
Makes AI think more carefully and be more truthful.
Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation?
Computation and Language
Helps computers tell if their answers are true.