Learning to Reason for Hallucination Span Detection
By: Hsuan Su, Ting-Yao Hu, Hema Swetha Koppula, and more
Potential Business Impact:
Teaches computers to spot fake facts in writing.
Large language models (LLMs) often generate hallucinations -- unsupported content that undermines reliability. While most prior work frames hallucination detection as a binary task, many real-world applications require identifying hallucinated spans, which is a multi-step decision-making process. This naturally raises the question of whether explicit reasoning can help the complex task of detecting hallucination spans. To answer this question, we first evaluate pretrained models with and without Chain-of-Thought (CoT) reasoning, and show that CoT reasoning has the potential to generate at least one correct answer when sampled multiple times. Motivated by this, we propose RL4HS, a reinforcement learning framework that incentivizes reasoning with a span-level reward function. RL4HS builds on Group Relative Policy Optimization and introduces Class-Aware Policy Optimization to mitigate the reward imbalance issue. Experiments on the RAGTruth benchmark (summarization, question answering, data-to-text) show that RL4HS surpasses pretrained reasoning models and supervised fine-tuning, demonstrating the necessity of reinforcement learning with span-level rewards for detecting hallucination spans.
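The abstract does not spell out the exact span-level reward or the class-aware rebalancing, so the sketch below is illustrative only: it assumes spans are (start, end) character offsets, uses an overlap-F1 score as a stand-in for the span-level reward, and pairs it with a GRPO-style group-normalized advantage that optionally rescales by a hypothetical per-class weight.

```python
import statistics

# Hypothetical sketch, NOT the paper's exact formulation: reward shape,
# class labels, and weights below are illustrative assumptions.

def span_f1_reward(predicted_spans, gold_spans):
    """Span-level reward: F1 over character positions covered by the
    predicted vs. gold hallucination spans ((start, end) offsets)."""
    def covered(spans):
        chars = set()
        for start, end in spans:
            chars.update(range(start, end))
        return chars

    pred, gold = covered(predicted_spans), covered(gold_spans)
    if not pred and not gold:
        return 1.0  # correctly predicting "no hallucination"
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)


def group_relative_advantages(rewards, class_weights=None, labels=None):
    """GRPO-style advantages: normalize each sampled response's reward by the
    group mean/std; optionally rescale by a per-class weight (a stand-in for
    the class-aware rebalancing the abstract alludes to)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0
    advantages = [(r - mean) / std for r in rewards]
    if class_weights and labels:
        advantages = [a * class_weights[c] for a, c in zip(advantages, labels)]
    return advantages


# Example: one predicted span partially overlapping the gold span.
print(span_f1_reward([(10, 25)], [(15, 30)]))  # ~0.667

# Example: advantages for three sampled responses to the same input.
print(group_relative_advantages(
    [0.2, 0.7, 1.0],
    class_weights={"hallucinated": 1.5, "clean": 1.0},
    labels=["hallucinated", "hallucinated", "clean"]))
```

In this reading, the span-level reward gives partial credit for imperfect boundaries instead of an all-or-nothing binary signal, and the per-class weighting is one plausible way to keep "no hallucination" cases from dominating the gradient; the actual RL4HS design may differ.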
Similar Papers
Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification
Artificial Intelligence
Makes AI tell the truth, not make things up.
Reasoning Large Language Model Errors Arise from Hallucinating Critical Problem Features
Machine Learning (CS)
AI models invent fake connections in problems.
Mitigating Hallucinations in Large Language Models via Causal Reasoning
Computation and Language
Teaches computers to think logically, reducing fake answers.