MRGAgents: A Multi-Agent Framework for Improved Medical Report Generation with Med-LVLMs
By: Pengyu Wang, Shuchang Ye, Usman Naseem, and more
Potential Business Impact:
Helps doctors find hidden problems in X-rays.
Medical Large Vision-Language Models (Med-LVLMs) have been widely adopted for medical report generation. Despite achieving state-of-the-art performance, Med-LVLMs exhibit a bias toward predicting all findings as normal, leading to reports that overlook critical abnormalities. They also often fail to provide comprehensive descriptions of the radiologically relevant regions necessary for accurate diagnosis. To address these challenges, we propose Medical Report Generation Agents (MRGAgents), a novel multi-agent framework that fine-tunes specialized agents for different disease categories. By curating subsets of the IU X-ray and MIMIC-CXR datasets to train disease-specific agents, MRGAgents generates reports that better balance normal and abnormal findings while ensuring comprehensive coverage of clinically relevant regions. Our experiments demonstrate that MRGAgents outperforms the state of the art, improving both report comprehensiveness and diagnostic utility.
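To make the multi-agent idea concrete, below is a minimal sketch of how disease-specific agents could be combined into a single report. Everything here is an assumption for illustration: the `Finding`, `DiseaseAgent`, and `generate_report` names are hypothetical, and each agent stands in for a Med-LVLM fine-tuned on a disease-specific subset; the paper's actual routing and merging logic may differ.

```python
# Hypothetical sketch of multi-agent report composition (not the authors' code).
# Each "agent" stands for a Med-LVLM fine-tuned on one disease category;
# a simple composer merges their per-region findings into one report.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Finding:
    region: str    # e.g. "lungs", "heart", "pleura"
    sentence: str  # natural-language description produced by an agent


# Stand-in type: in practice each agent would wrap a fine-tuned Med-LVLM.
DiseaseAgent = Callable[[bytes], List[Finding]]


def generate_report(image: bytes, agents: Dict[str, DiseaseAgent]) -> str:
    """Run every disease-specific agent on the image and merge their findings."""
    findings: List[Finding] = []
    for _disease, agent in agents.items():
        findings.extend(agent(image))

    # Group findings by anatomical region so the final report covers all
    # clinically relevant regions rather than defaulting to "normal".
    by_region: Dict[str, List[str]] = {}
    for f in findings:
        by_region.setdefault(f.region, []).append(f.sentence)

    sections = [
        f"{region.capitalize()}: " + " ".join(sentences)
        for region, sentences in by_region.items()
    ]
    return "\n".join(sections)
```

The design choice this sketch illustrates is that abnormality detection is delegated to specialized agents rather than a single generalist model, which is how the framework aims to counter the bias toward all-normal reports.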
Similar Papers
A Multimodal Multi-Agent Framework for Radiology Report Generation
Artificial Intelligence
Helps doctors write faster, more accurate patient reports.
Medical AI Consensus: A Multi-Agent Framework for Radiology Report Generation and Evaluation
Artificial Intelligence
Helps doctors write patient reports faster.
MRG-R1: Reinforcement Learning for Clinically Aligned Medical Report Generation
Computation and Language
Makes AI write correct medical reports from scans.