Score: 1

Beyond Correctness: Rewarding Faithful Reasoning in Retrieval-Augmented Generation

Published: October 15, 2025 | arXiv ID: 2510.13272v1

By: Zhichao Xu, Zongyu Wu, Yun Zhou, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Makes AI search agents' reasoning steps more faithful to the evidence they retrieve.

Business Areas:
Semantic Search, Internet Services

Inspired by the success of reinforcement learning (RL) in Large Language Model (LLM) training for domains like math and code, recent works have begun exploring how to train LLMs to use search engines more effectively as tools for retrieval-augmented generation. Although these methods achieve performance improvements across QA benchmarks, many prioritize final answer correctness while overlooking the quality of intermediate reasoning steps, which can lead to unfaithful chains of thought. In this paper, we first introduce a comprehensive framework for evaluating RL-based search agents, covering three distinct faithfulness metrics: information-think faithfulness, think-answer faithfulness, and think-search faithfulness. Our evaluations reveal that a prototypical RL-based search agent, Search-R1, has significant room for improvement in this regard. To foster faithful reasoning, we introduce VERITAS (Verifying Entailed Reasoning through Intermediate Traceability in Agentic Search), a novel framework that integrates fine-grained faithfulness rewards into the reinforcement learning process. Our experiments show that models trained with VERITAS not only significantly improve reasoning faithfulness but also achieve comparable task performance across seven QA benchmarks.
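The abstract describes fine-grained faithfulness rewards but this summary includes no code. The Python sketch below shows one way such signals could be folded into an RL reward alongside answer correctness; the `judge_entailment` stub, the `trace` fields, and the weights are all hypothetical illustrations, not the authors' VERITAS implementation.

```python
def judge_entailment(premise: str, hypothesis: str) -> float:
    """Crude token-overlap stand-in for a real judge, scored in [0, 1].

    A production system would call an NLI model or an LLM judge here;
    this stub only keeps the sketch self-contained and runnable.
    """
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0


def reward(trace: dict) -> float:
    """Score one search-agent rollout with correctness plus faithfulness.

    Assumed trace fields (hypothetical): "retrieved" (concatenated search
    results), "thoughts" (chain-of-thought text), "queries" (concatenated
    issued search queries), "answer" (final answer), "gold" (reference).
    """
    # Final-answer correctness (exact match), the standard RL signal.
    correct = float(trace["answer"].strip().lower()
                    == trace["gold"].strip().lower())

    # The paper's three faithfulness dimensions, each scored in [0, 1]:
    info_think = judge_entailment(trace["retrieved"], trace["thoughts"])
    think_answer = judge_entailment(trace["thoughts"], trace["answer"])
    think_search = judge_entailment(trace["thoughts"], trace["queries"])

    # Weighted combination; the 0.7/0.1 split is a made-up placeholder.
    return (0.7 * correct
            + 0.1 * info_think
            + 0.1 * think_answer
            + 0.1 * think_search)
```

In a PPO- or GRPO-style trainer, this scalar would simply replace the pure correctness reward when computing advantages, which is the general shape of the integration the abstract describes.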

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science: Computation and Language