Learning to Seek Evidence: A Verifiable Reasoning Agent with Causal Faithfulness Analysis
By: Yuhang Huang, Zekai Lin, Fan Zhong, and more
Potential Business Impact:
AI explains medical guesses using proof.
Explanations for AI models in high-stakes domains like medicine often lack verifiability, which can hinder trust. To address this, we propose an interactive agent that produces explanations through an auditable sequence of actions. The agent learns a policy to strategically seek external visual evidence to support its diagnostic reasoning. This policy is optimized using reinforcement learning, resulting in a model that is both efficient and generalizable. Our experiments show that this action-based reasoning process significantly improves calibrated accuracy, reducing the Brier score by 18% compared to a non-interactive baseline. To validate the faithfulness of the agent's explanations, we introduce a causal intervention method. By masking the visual evidence the agent chooses to use, we observe a measurable degradation in its performance (ΔBrier = +0.029), confirming that the evidence is integral to its decision-making process. Our work provides a practical framework for building AI systems with verifiable and faithful reasoning capabilities.
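To make the causal faithfulness check concrete, the sketch below (not the authors' code) shows one way such a ΔBrier intervention could be computed: run the agent normally, occlude the visual regions it chose to consult, re-run it on the intervened input, and compare the two Brier scores. The agent.run API, the mask_regions helper, and the case fields are hypothetical stand-ins introduced only for illustration.

import numpy as np

def brier_score(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean squared error between predicted class probabilities and one-hot labels."""
    one_hot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - one_hot) ** 2, axis=1)))

def delta_brier(agent, cases) -> float:
    """Compare calibration with and without the agent-selected visual evidence."""
    probs, probs_masked, labels = [], [], []
    for case in cases:
        # Normal interactive episode: the agent decides which evidence to request.
        result = agent.run(case.image)                 # hypothetical API
        probs.append(result.class_probs)

        # Causal intervention: occlude exactly the regions the agent consulted,
        # then re-run the same policy on the intervened input.
        occluded = mask_regions(case.image, result.requested_regions)  # hypothetical helper
        probs_masked.append(agent.run(occluded).class_probs)

        labels.append(case.label)

    probs = np.stack(probs)
    probs_masked = np.stack(probs_masked)
    labels = np.array(labels)

    # A positive difference (worse calibration after masking) indicates the cited
    # evidence was integral to the decision, as the abstract reports with +0.029.
    return brier_score(probs_masked, labels) - brier_score(probs, labels)

Under this reading, a positive ΔBrier means the agent's predictions become measurably less calibrated once the evidence it cited is removed, which is the faithfulness claim the intervention is meant to test.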
Similar Papers
Beyond Correctness: Rewarding Faithful Reasoning in Retrieval-Augmented Generation
Computation and Language
Makes AI's thinking steps more honest.
Causal-Enhanced AI Agents for Medical Research Screening
Artificial Intelligence
AI finds medical facts accurately, without mistakes.
Project Ariadne: A Structural Causal Framework for Auditing Faithfulness in LLM Agents
Artificial Intelligence
Checks whether AI's thinking matches its answers.