Learning to Reason for Factuality
By: Xilun Chen, Ilia Kulikov, Vincent-Pierre Berges, and more
Potential Business Impact:
Makes AI write factual answers, not made-up ones.
Reasoning Large Language Models (R-LLMs) have significantly advanced complex reasoning tasks but often struggle with factuality, generating substantially more hallucinations than their non-reasoning counterparts on long-form factuality benchmarks. However, extending online Reinforcement Learning (RL), a key component in recent R-LLM advancements, to the long-form factuality setting poses several unique challenges due to the lack of reliable verification methods. Previous work has utilized automatic factuality evaluation frameworks such as FActScore to curate preference data in the offline RL setting, yet we find that directly leveraging such methods as the reward in online RL leads to reward hacking in multiple ways, such as producing less detailed or relevant responses. We propose a novel reward function that simultaneously considers factual precision, response detail level, and answer relevance, and we apply online RL to learn high-quality factual reasoning. Evaluated on six long-form factuality benchmarks, our factual reasoning model achieves an average reduction of 23.1 percentage points in hallucination rate, a 23% increase in answer detail level, and no degradation in the overall response helpfulness.
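To make the reward-hacking point concrete, below is a minimal Python sketch of what a combined reward of this kind might look like. It is an illustrative assumption, not the authors' actual formulation: the `VerifiedResponse` fields, the component weights, and the `target_claims` saturation cap are all hypothetical stand-ins for whatever verification signals the paper's reward uses.

```python
# Hypothetical sketch of a combined factuality reward (NOT the paper's exact formula).
# Assumes per-claim verification results and a relevance score in [0, 1] are available.

from dataclasses import dataclass


@dataclass
class VerifiedResponse:
    num_supported_claims: int  # claims judged factually supported by a verifier
    num_total_claims: int      # all extracted claims (a rough proxy for detail level)
    relevance: float           # how well the response addresses the prompt, in [0, 1]


def factuality_reward(resp: VerifiedResponse,
                      target_claims: int = 30,
                      w_precision: float = 1.0,
                      w_detail: float = 0.5,
                      w_relevance: float = 0.5) -> float:
    """Combine factual precision, detail level, and relevance into one scalar reward.

    Rewarding precision alone is hackable (the model can answer tersely or drift
    off-topic), so detail and relevance terms are included to counteract that.
    """
    if resp.num_total_claims == 0:
        return 0.0  # empty or evasive answers earn nothing
    precision = resp.num_supported_claims / resp.num_total_claims
    detail = min(resp.num_total_claims / target_claims, 1.0)  # saturating detail bonus
    return (w_precision * precision
            + w_detail * detail
            + w_relevance * resp.relevance)
```

In this sketch, a terse but fully accurate answer scores lower than an equally accurate answer with more supported detail, which is the kind of trade-off the abstract says a precision-only reward fails to capture.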
Similar Papers
Hallucination to Truth: A Review of Fact-Checking and Factuality Evaluation in Large Language Models
Computation and Language
Makes AI tell the truth, not lies.
Scaling Reasoning can Improve Factuality in Large Language Models
Computation and Language
Makes computers answer questions more accurately.
VeriFact: Enhancing Long-Form Factuality Evaluation with Refined Fact Extraction and Reference Facts
Computation and Language
Checks if AI's long answers are true.