A Medical Multimodal Diagnostic Framework Integrating Vision-Language Models and Logic Tree Reasoning
By: Zelin Zang, Wenyi Gu, Siqi Ma, and more
As large language models (LLMs) and vision-language models (VLMs) are rapidly adopted in medicine, simply integrating clinical text and medical imaging does not guarantee reliable reasoning. Existing multimodal models often produce hallucinations or inconsistent chains of thought, limiting clinical trust. We propose a diagnostic framework built upon LLaVA that combines vision-language alignment with logic-regularized reasoning. The system includes an input encoder for text and images, a projection module for cross-modal alignment, a reasoning controller that decomposes diagnostic tasks into steps, and a logic tree generator that assembles stepwise premises into verifiable conclusions. Evaluations on MedXpertQA and other benchmarks show that our method improves diagnostic accuracy and yields more interpretable reasoning traces on multimodal tasks, while remaining competitive in text-only settings. These results suggest a promising step toward trustworthy multimodal medical AI.
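The abstract names four components (input encoder, projection module, reasoning controller, logic tree generator) but gives no implementation details. The sketch below shows one way these pieces could fit together; all class names, interfaces, and the toy step logic are assumptions introduced for illustration and are not the authors' code.

```python
# Minimal, illustrative sketch of the pipeline described in the abstract.
# Every name and interface here is an assumption for illustration only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Premise:
    """One intermediate finding produced by a reasoning step."""
    statement: str
    support: str  # which modality or evidence the step relied on


@dataclass
class LogicTree:
    """Assembles stepwise premises into a conclusion that can be audited."""
    premises: List[Premise] = field(default_factory=list)

    def add(self, premise: Premise) -> None:
        self.premises.append(premise)

    def conclude(self) -> str:
        # A real system would check logical consistency between premises
        # before emitting a diagnosis; here we simply join them.
        chain = " AND ".join(p.statement for p in self.premises)
        return f"Diagnosis supported by: {chain}"


class DiagnosticPipeline:
    """Toy stand-in for: encoder -> projection -> reasoning controller -> logic tree."""

    def encode(self, text: str, image_desc: str) -> dict:
        # Placeholder for the text/image encoders and the projection module
        # that aligns both modalities in a shared representation.
        return {"text": text, "image": image_desc}

    def plan_steps(self, aligned: dict) -> List[str]:
        # The reasoning controller decomposes the diagnostic task into steps.
        return [
            f"Identify salient findings in the image: {aligned['image']}",
            f"Relate findings to the clinical history: {aligned['text']}",
            "Rule out alternative explanations",
        ]

    def diagnose(self, text: str, image_desc: str) -> str:
        aligned = self.encode(text, image_desc)
        tree = LogicTree()
        for step in self.plan_steps(aligned):
            # Each step would normally be answered by the VLM and verified;
            # here the step description stands in for the verified premise.
            tree.add(Premise(statement=step, support="text+image"))
        return tree.conclude()


if __name__ == "__main__":
    pipeline = DiagnosticPipeline()
    print(pipeline.diagnose("65-year-old with chronic cough", "chest X-ray"))
```

The key design point this sketch illustrates is that the final answer is derived only from the accumulated premises, so the reasoning trace remains inspectable end to end.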