RadAgents: Multimodal Agentic Reasoning for Chest X-ray Interpretation with Radiologist-like Workflows
By: Kai Zhang, Corey D Barrett, Jangwon Kim, and more
Potential Business Impact:
Helps doctors read X-rays better and more safely.
Agentic systems offer a potential path to solve complex clinical tasks through collaboration among specialized agents, augmented by tool use and external knowledge bases. Nevertheless, for chest X-ray (CXR) interpretation, prevailing methods remain limited: (i) reasoning is frequently neither clinically interpretable nor aligned with guidelines, reflecting mere aggregation of tool outputs; (ii) multimodal evidence is insufficiently fused, yielding text-only rationales that are not visually grounded; and (iii) systems rarely detect or resolve cross-tool inconsistencies and provide no principled verification mechanisms. To bridge the above gaps, we present RadAgents, a multi-agent framework for CXR interpretation that couples clinical priors with task-aware multimodal reasoning. In addition, we integrate grounding and multimodal retrieval-augmentation to verify and resolve context conflicts, resulting in outputs that are more reliable, transparent, and consistent with clinical practice.
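The abstract highlights a gap most systems leave open: detecting and resolving cross-tool inconsistencies with a principled verification step. A minimal sketch of that idea (not the paper's actual implementation — all names, the `Finding` structure, and the `retrieve_evidence` hook are hypothetical stand-ins for the grounding and multimodal retrieval-augmentation components) might look like:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    label: str        # e.g. "cardiomegaly"
    present: bool     # the agent's verdict
    confidence: float # score in [0, 1]
    source: str       # which specialized agent/tool produced it

def detect_conflicts(findings):
    """Group findings by label and flag labels where specialized
    agents disagree on presence -- the cross-tool inconsistency
    the abstract says prevailing systems rarely check."""
    by_label = {}
    for f in findings:
        by_label.setdefault(f.label, []).append(f)
    return {label: fs for label, fs in by_label.items()
            if len({f.present for f in fs}) > 1}

def resolve(conflicting, retrieve_evidence):
    """Resolve each conflict by consulting external evidence
    (a stand-in for grounding + retrieval-augmentation): keep the
    highest-confidence finding that agrees with the evidence."""
    resolved = {}
    for label, fs in conflicting.items():
        supported = retrieve_evidence(label)  # evidence-backed verdict
        best = max((f for f in fs if f.present == supported),
                   key=lambda f: f.confidence,
                   default=max(fs, key=lambda f: f.confidence))
        resolved[label] = best
    return resolved

# Toy run: two agents disagree on cardiomegaly; effusion is uncontested.
findings = [
    Finding("cardiomegaly", True, 0.7, "detector_agent"),
    Finding("cardiomegaly", False, 0.6, "report_agent"),
    Finding("effusion", True, 0.9, "detector_agent"),
]
conflicts = detect_conflicts(findings)
resolved = resolve(conflicts, retrieve_evidence=lambda label: True)
print(sorted(conflicts), resolved["cardiomegaly"].source)
# → ['cardiomegaly'] detector_agent
```

The key design point the abstract argues for is the explicit `resolve` step: rather than merely aggregating tool outputs, disagreements are surfaced and adjudicated against external evidence before anything reaches the final report.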
Similar Papers
CXRAgent: Director-Orchestrated Multi-Stage Reasoning for Chest X-Ray Interpretation
Artificial Intelligence
Helps doctors read X-rays better by checking evidence.
RadFabric: Agentic AI System with Reasoning Capability for Radiology
CV and Pattern Recognition
Helps doctors find sickness on X-rays better.
MedRAX: Medical Reasoning Agent for Chest X-ray
Machine Learning (CS)
AI reads X-rays to answer doctor questions.