ART: Adaptive Reasoning Trees for Explainable Claim Verification
By: Sahil Wadhwa, Himanshu Kumar, Guanqun Yang, and more
Potential Business Impact:
Helps AI explain its answers so we can trust it.
Large Language Models (LLMs) are powerful candidates for complex decision-making, leveraging vast encoded knowledge and remarkable zero-shot abilities. However, their adoption in high-stakes environments is hindered by their opacity: their outputs lack faithful explanations and cannot be effectively contested to correct errors, undermining trustworthiness. In this paper, we propose ART (Adaptive Reasoning Trees), a hierarchical method for claim verification. The process begins with a root claim, which branches into supporting and attacking child arguments. An argument's strength is determined bottom-up via a pairwise tournament of its children, adjudicated by a judge LLM, so that a final verdict is derived systematically and remains transparent and contestable, a property that methods like Chain-of-Thought (CoT) lack. We empirically validate ART on multiple datasets, analyzing different argument generators and comparison strategies. Our findings show that ART's structured reasoning outperforms strong baselines, establishing a new benchmark for explainable claim verification that is more reliable and keeps each step of the decision process clear.
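To make the mechanism concrete, here is a minimal Python sketch of the bottom-up tournament described in the abstract. Everything in it is illustrative: the `Argument` class, the `tournament_strength` and `verdict` functions, the win-count scoring, and the support-minus-attack aggregation are assumptions chosen for clarity, not the paper's actual prompts or implementation, and the toy judge stands in for a real judge LLM that would compare two arguments.

```python
from dataclasses import dataclass, field
from itertools import combinations
from typing import Callable, List

# Stand-in for the judge LLM: given two argument texts, return "A" or "B".
Judge = Callable[[str, str], str]

@dataclass
class Argument:
    text: str
    stance: str                                   # "support" or "attack" w.r.t. the parent
    children: List["Argument"] = field(default_factory=list)
    strength: float = 1.0                         # leaves keep unit strength (an assumption)

def tournament_strength(node: Argument, judge: Judge) -> float:
    """Score a node bottom-up: recurse into children, run a pairwise
    tournament among them adjudicated by the judge, and turn win counts
    into child strengths before aggregating them into the node's strength."""
    for child in node.children:
        tournament_strength(child, judge)
    if node.children:
        wins = {id(c): 0 for c in node.children}
        for a, b in combinations(node.children, 2):
            winner = a if judge(a.text, b.text) == "A" else b
            wins[id(winner)] += 1
        for c in node.children:
            c.strength = float(wins[id(c)])
        # One plausible aggregation rule (an assumption, not necessarily the
        # paper's): supporting mass minus attacking mass.
        node.strength = sum(c.strength if c.stance == "support" else -c.strength
                            for c in node.children)
    return node.strength

def verdict(root: Argument, judge: Judge) -> str:
    """Derive the final verdict; the scored tree itself is the explanation."""
    return "SUPPORTED" if tournament_strength(root, judge) > 0 else "REFUTED"

if __name__ == "__main__":
    # Toy judge that prefers the longer argument; a real system would prompt
    # a judge LLM to compare the two arguments instead.
    toy_judge: Judge = lambda a, b: "A" if len(a) >= len(b) else "B"
    claim = Argument("Drinking coffee reduces mortality risk.", "support", children=[
        Argument("Large cohort studies report an inverse association.", "support"),
        Argument("Observed effects may be confounded by lifestyle.", "attack"),
    ])
    print(verdict(claim, toy_judge))
```

Because every pairwise decision is recorded in the tree, a disputed verdict can be contested by challenging the specific comparison or argument that produced it, which is the contestability property the abstract emphasizes.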
Similar Papers
ART: Adaptive Response Tuning Framework -- A Multi-Agent Tournament-Based Approach to LLM Response Optimization
Computation and Language
Makes AI give better answers by having them compete.
Tree-of-Reasoning: Towards Complex Medical Diagnosis via Multi-Agent Reasoning with Evidence Tree
Artificial Intelligence
Helps doctors diagnose illnesses better with smarter AI.
A Novel Architecture for Symbolic Reasoning with Decision Trees and LLM Agents
Artificial Intelligence
Makes AI understand and solve problems better.