MedGemma vs GPT-4: Open-Source and Proprietary Zero-shot Medical Disease Classification from Images
By: Md. Sazzadul Islam Prottasha, Nabil Walid Rafi
Potential Business Impact:
A fine-tuned open-source AI finds diseases in scans better than GPT-4.
Multimodal Large Language Models (LLMs) offer an emerging paradigm for medical imaging, interpreting scans through the lens of extensive clinical knowledge and opening a transformative approach to disease classification. This study presents a critical comparison between two fundamentally different AI architectures for diagnosing six diseases: the specialized open-source model MedGemma and the proprietary large multimodal model GPT-4. The MedGemma-4b-it model, fine-tuned with Low-Rank Adaptation (LoRA), demonstrated superior diagnostic capability, achieving a mean test accuracy of 80.37% versus 69.58% for zero-shot GPT-4. MedGemma also exhibited notably higher sensitivity on high-stakes clinical tasks such as cancer and pneumonia detection. Quantitative analysis via confusion matrices and classification reports provides detailed insight into performance across all categories. These results underscore that domain-specific fine-tuning is essential for minimizing hallucinations in clinical deployment, positioning MedGemma as a capable tool for complex, evidence-based medical reasoning.
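The abstract's core technique is LoRA fine-tuning of MedGemma-4b-it. A minimal sketch of that setup is shown below, assuming the Hugging Face transformers and peft libraries and the google/medgemma-4b-it checkpoint; the LoRA rank, alpha, dropout, and target modules here are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch: attaching LoRA adapters to an image-text model such as
# MedGemma-4b-it with Hugging Face transformers + peft.
# All hyperparameters below are illustrative, not the paper's values.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from peft import LoraConfig, get_peft_model

model_id = "google/medgemma-4b-it"  # assumed Hugging Face model id

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)

# LoRA inserts small trainable low-rank matrices into the attention
# projections, so only a tiny fraction of parameters is updated.
lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update (illustrative)
    lora_alpha=32,             # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows only adapter weights are trainable
```

The confusion matrices and classification reports the study cites correspond to scikit-learn's standard utilities; the labels below are toy placeholders, not the paper's data:

```python
# Sketch of the per-class evaluation: confusion matrix + classification report.
from sklearn.metrics import classification_report, confusion_matrix

y_true = ["pneumonia", "normal", "cancer", "pneumonia"]  # toy ground truth
y_pred = ["pneumonia", "normal", "normal", "pneumonia"]  # toy predictions

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))
```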
Similar Papers
MedGemma Technical Report
Artificial Intelligence
Helps doctors understand medical images and notes better.
Beyond Diagnosis: Evaluating Multimodal LLMs for Pathology Localization in Chest Radiographs
CV and Pattern Recognition
AI can point out where sickness appears in X-rays.
Fine-Tuning MedGemma for Clinical Captioning to Enhance Multimodal RAG over Malaysia CPGs
Computation and Language
Helps AI describe medical pictures using local clinical guidelines.