Wave-GMS: Lightweight Multi-Scale Generative Model for Medical Image Segmentation
By: Talha Ahmed, Nehal Ahmed Shaikh, Hassan Mohy-ud-Din
Potential Business Impact:
Helps doctors automatically outline problem areas in medical scans, even on affordable hospital computers.
For equitable deployment of AI tools in hospitals and healthcare facilities, we need deep segmentation networks that deliver high performance yet can be trained with large batch sizes on cost-effective GPUs with limited memory. In this work, we propose Wave-GMS, a lightweight and efficient multi-scale generative model for medical image segmentation. Wave-GMS has a substantially smaller number of trainable parameters, does not require loading memory-intensive pretrained vision foundation models, and supports training with large batch sizes on GPUs with limited memory. We conducted extensive experiments on four publicly available datasets (BUS, BUSI, Kvasir-Instrument, and HAM10000), demonstrating that Wave-GMS achieves state-of-the-art segmentation performance with superior cross-domain generalizability, while requiring only ~2.6M trainable parameters. Code is available at https://github.com/ATPLab-LUMS/Wave-GMS.
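The abstract does not describe the architecture in detail, but the name suggests a wavelet-style multi-scale representation of the input image. The sketch below is a minimal, hypothetical illustration of that idea only: it uses PyWavelets to build per-level subbands that a lightweight segmentation network could consume. The function name, the choice of Haar wavelets, and the number of levels are assumptions for illustration, not the authors' actual method (see the repository linked above for the real implementation).

```python
# Hedged sketch: multi-scale 2D wavelet decomposition of a medical image.
# Assumption: a wavelet-based multi-scale input representation, as the name
# "Wave-GMS" hints; the paper's actual pipeline is not detailed in the abstract.
import numpy as np
import pywt  # PyWavelets


def multiscale_wavelet_features(image: np.ndarray, levels: int = 2):
    """Return per-level (LL, LH, HL, HH) subbands of a 2D grayscale image."""
    features = []
    current = image.astype(np.float32)
    for _ in range(levels):
        ll, (lh, hl, hh) = pywt.dwt2(current, "haar")
        features.append({"LL": ll, "LH": lh, "HL": hl, "HH": hh})
        current = ll  # recurse on the low-frequency band for the next scale
    return features


if __name__ == "__main__":
    img = np.random.rand(256, 256)  # stand-in for a BUS/BUSI ultrasound slice
    for i, f in enumerate(multiscale_wavelet_features(img, levels=2), start=1):
        print(f"level {i}: LL shape {f['LL'].shape}")
```

Each level halves the spatial resolution while separating low-frequency structure (LL) from horizontal, vertical, and diagonal detail bands, which is one common way to give a small network multi-scale context without adding many trainable parameters.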
Similar Papers
LGMSNet: Thinning a medical image segmentation model via dual-level multiscale fusion
CV and Pattern Recognition
Helps doctors see diseases in scans better.
GBT-SAM: Adapting a Foundational Deep Learning Model for Generalizable Brain Tumor Segmentation via Efficient Integration of Multi-Parametric MRI Data
Image and Video Processing
Helps doctors find brain tumors faster.
GaMNet: A Hybrid Network with Gabor Fusion and NMamba for Efficient 3D Glioma Segmentation
CV and Pattern Recognition
Helps doctors find brain tumors faster and more accurately.