UCAgents: Unidirectional Convergence for Visual Evidence Anchored Multi-Agent Medical Decision-Making
By: Qianhan Feng, Zhongzhen Huang, Yakun Zhu, and more
Potential Business Impact:
Makes AI doctors explain diagnoses using real pictures.
Vision-Language Models (VLMs) show promise in medical diagnosis, yet they suffer from reasoning detachment: linguistically fluent explanations drift away from verifiable image evidence, undermining clinical trust. Recent multi-agent frameworks simulate Multidisciplinary Team (MDT) debates to mitigate single-model bias, but their open-ended discussions amplify textual noise and computational cost while failing to anchor reasoning in visual evidence, the cornerstone of medical decision-making. We propose UCAgents, a hierarchical multi-agent framework that enforces unidirectional convergence through structured evidence auditing. Inspired by clinical workflows, UCAgents forbids position changes and limits agent interactions to targeted evidence verification, suppressing rhetorical drift while amplifying visual signal extraction. UCAgents also introduces a one-round inquiry discussion to uncover potential visual-textual misalignment. This design jointly constrains visual ambiguity and textual noise, a dual-noise bottleneck that we formalize via information theory. Extensive experiments on four medical VQA benchmarks show that UCAgents achieves superior accuracy (71.3% on PathVQA, +6.0% over the state of the art) at 87.7% lower token cost; the evaluation further confirms that UCAgents balances uncovering more visual evidence against admitting confounding textual interference. These results demonstrate that UCAgents delivers both the diagnostic reliability and the computational efficiency critical for real-world clinical deployment. Code is available at https://github.com/fqhank/UCAgents.
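The convergence protocol the abstract describes (commit-once positions, a single targeted inquiry round, evidence-weighted aggregation) can be made concrete with a minimal Python sketch. Everything below is an illustrative assumption on our part, not the released implementation: the class names, prompts, and toy aggregation rule are hypothetical, and the injected vlm object stands in for any wrapper exposing ask(image, prompt) -> str. Consult the GitHub repository above for the authors' actual code.

from dataclasses import dataclass, field


@dataclass
class Position:
    diagnosis: str
    evidence: list[str] = field(default_factory=list)  # image-grounded findings


class SpecialistAgent:
    def __init__(self, name: str, vlm):
        self.name = name
        self.vlm = vlm  # hypothetical wrapper: ask(image, prompt) -> str

    def initial_position(self, image, question: str) -> Position:
        # Commit once; the framework forbids later position changes.
        answer = self.vlm.ask(
            image,
            f"As a {self.name}, answer: {question} Cite only visible image findings.",
        )
        return Position(diagnosis=answer, evidence=[answer])

    def verify_evidence(self, image, inquiry: str) -> str:
        # One-round inquiry: reply with evidence only, never a revised stance.
        return self.vlm.ask(image, f"Verify against the image: {inquiry}")


class EvidenceAuditor:
    # Hypothetical auditor that issues targeted checks and aggregates.
    def make_inquiry(self, position: Position) -> str:
        return (
            f"Which image region supports the claim '{position.diagnosis}'? "
            "Answer only from visible evidence."
        )

    def decide(self, positions: dict[str, Position]) -> str:
        # Toy rule: the position backed by the most audited evidence wins.
        best = max(positions.values(), key=lambda p: len(p.evidence))
        return best.diagnosis


def ucagents_round(agents, auditor, image, question):
    # 1) Independent, image-anchored initial positions (no open debate).
    positions = {a.name: a.initial_position(image, question) for a in agents}

    # 2) Single inquiry round: the auditor probes visual-textual
    #    misalignment; agents may only add evidence, never switch positions.
    for agent in agents:
        inquiry = auditor.make_inquiry(positions[agent.name])
        positions[agent.name].evidence.append(
            agent.verify_evidence(image, inquiry)
        )

    # 3) Converge once: aggregate the fixed positions by audited evidence
    #    instead of running further debate rounds.
    return auditor.decide(positions)

In this sketch, unidirectionality is enforced structurally: verify_evidence can only append to a position's evidence list, and no code path rewrites a diagnosis after initial_position returns, mirroring the abstract's ban on position changes during discussion.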
Similar Papers
Med-VRAgent: A Framework for Medical Visual Reasoning-Enhanced Agents
Artificial Intelligence
Helps doctors understand medical images better.
Enhancing Agentic Autonomous Scientific Discovery with Vision-Language Model Capabilities
Computation and Language
Computers discover science by checking their own work.
A Medical Multimodal Diagnostic Framework Integrating Vision-Language Models and Logic Tree Reasoning
Artificial Intelligence
Helps AI doctors understand images and text better.