Learning to Seek Evidence: A Verifiable Reasoning Agent with Causal Faithfulness Analysis

Published: November 3, 2025 | arXiv ID: 2511.01425v1

By: Yuhang Huang, Zekai Lin, Fan Zhong, and others

Potential Business Impact:

The AI backs its medical diagnoses with auditable visual evidence.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Explanations for AI models in high-stakes domains like medicine often lack verifiability, which can hinder trust. To address this, we propose an interactive agent that produces explanations through an auditable sequence of actions. The agent learns a policy to strategically seek external visual evidence to support its diagnostic reasoning. This policy is optimized using reinforcement learning, resulting in a model that is both efficient and generalizable. Our experiments show that this action-based reasoning process significantly improves calibrated accuracy, reducing the Brier score by 18% compared to a non-interactive baseline. To validate the faithfulness of the agent's explanations, we introduce a causal intervention method. By masking the visual evidence the agent chooses to use, we observe a measurable degradation in its performance (ΔBrier = +0.029), confirming that the evidence is integral to its decision-making process. Our work provides a practical framework for building AI systems with verifiable and faithful reasoning capabilities.
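The abstract's two key numbers both rest on the Brier score, the mean squared error between predicted probabilities and binary outcomes, and on comparing that score before and after masking the agent's chosen evidence. A minimal sketch of that comparison is below; the data values are purely illustrative, not taken from the paper.

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    assert len(probs) == len(outcomes) and len(probs) > 0
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Hypothetical predictions on the same cases, with and without the
# agent-selected visual evidence (numbers are illustrative only).
outcomes     = [1, 0, 1, 1, 0]
probs_full   = [0.9, 0.2, 0.8, 0.7, 0.1]  # evidence available
probs_masked = [0.7, 0.4, 0.6, 0.5, 0.3]  # evidence masked out

# A positive delta means masking the evidence degrades calibration,
# which is the paper's causal-faithfulness criterion.
delta_brier = brier_score(probs_masked, outcomes) - brier_score(probs_full, outcomes)
print(f"ΔBrier = {delta_brier:+.3f}")
```

In this framing, the paper's reported ΔBrier = +0.029 would correspond to a positive delta of this kind: removing the selected evidence measurably worsens the model's calibrated predictions.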

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Artificial Intelligence