DyGLNet: Hybrid Global-Local Feature Fusion with Dynamic Upsampling for Medical Image Segmentation
By: Yican Zhao, Ce Wang, You Hao, and others
Potential Business Impact:
Finds tiny sickness spots in body scans better.
Medical image segmentation faces challenges including multi-scale lesion variability, ill-defined tissue boundaries, and computationally intensive processing. This paper proposes DyGLNet, which achieves efficient and accurate segmentation by fusing global and local features with a dynamic upsampling mechanism. The model introduces a hybrid feature extraction module (SHDCBlock) that combines single-head self-attention with multi-scale dilated convolutions to model local detail and global context jointly. A dynamic adaptive upsampling module (DyFusionUp) then reconstructs feature maps with high fidelity using learnable offsets, and a lightweight design keeps computational overhead low. Experiments on seven public datasets show that DyGLNet outperforms existing methods, particularly in boundary accuracy and small-object segmentation, while exhibiting lower computational complexity, offering an efficient and reliable option for clinical medical image analysis. The code will be made available soon.
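The abstract does not give implementation details, so as a rough illustration of what offset-based dynamic upsampling generally looks like (the mechanism a module like DyFusionUp builds on), here is a minimal NumPy sketch. The function names `dynamic_upsample` and `bilinear_sample`, and the caller-supplied offsets, are illustrative assumptions, not the authors' code; in DyGLNet the offsets would be predicted by a learnable branch rather than passed in by hand.

```python
import numpy as np

def bilinear_sample(feat, ys, xs):
    """Sample a (H, W) feature map at fractional coords (ys, xs) bilinearly."""
    H, W = feat.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)  # vertical interpolation weight
    wx = np.clip(xs - x0, 0.0, 1.0)  # horizontal interpolation weight
    top = feat[y0, x0] * (1 - wx) + feat[y0, x1] * wx
    bot = feat[y1, x0] * (1 - wx) + feat[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def dynamic_upsample(feat, offsets, scale=2):
    """Upsample (H, W) -> (scale*H, scale*W), shifting each output sample
    by a per-pixel offset (hand-set here; learned in a model like DyGLNet)."""
    H, W = feat.shape
    Ho, Wo = H * scale, W * scale
    gy, gx = np.meshgrid(np.arange(Ho), np.arange(Wo), indexing="ij")
    # Base sampling positions mapped back into input coordinates.
    ys = (gy + 0.5) / scale - 0.5 + offsets[0]
    xs = (gx + 0.5) / scale - 0.5 + offsets[1]
    return bilinear_sample(feat, ys, xs)
```

With all-zero offsets this reduces to plain bilinear upsampling; nonzero offsets let each output pixel pull from a slightly shifted input location, which is how such modules adapt sampling to boundaries instead of interpolating on a rigid grid.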
Similar Papers
LGMSNet: Thinning a medical image segmentation model via dual-level multiscale fusion
CV and Pattern Recognition
Helps doctors see diseases in scans better.
UAGLNet: Uncertainty-Aggregated Global-Local Fusion Network with Cooperative CNN-Transformer for Building Extraction
CV and Pattern Recognition
Finds buildings in pictures better.
Dual-Stage Global and Local Feature Framework for Image Dehazing
CV and Pattern Recognition
Clears fog from big, detailed pictures.