Transformer-Progressive Mamba Network for Lightweight Image Super-Resolution
By: Sichen Guo, Wenjie Li, Yuanyang Liu, and others
Potential Business Impact:
Produces sharper images while using less computing power.
Recently, Mamba-based super-resolution (SR) methods have demonstrated the ability to capture global receptive fields with linear complexity, addressing the quadratic computational cost of Transformer-based SR approaches. However, existing Mamba-based methods lack fine-grained transitions across different modeling scales, which limits the efficiency of feature representation. In this paper, we propose T-PMambaSR, a lightweight SR framework that integrates window-based self-attention with Progressive Mamba. By enabling interactions among receptive fields of different scales, our method establishes a fine-grained modeling paradigm that progressively enhances feature representation with linear complexity. Furthermore, we introduce an Adaptive High-Frequency Refinement Module (AHFRM) to recover high-frequency details lost during Transformer and Mamba processing. Extensive experiments demonstrate that T-PMambaSR progressively enlarges the model's receptive field and improves its expressiveness, yielding better performance than recent Transformer- and Mamba-based methods at lower computational cost. Our code will be released upon acceptance.
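The core idea of the abstract, pairing window-based self-attention (local modeling, quadratic cost only within small windows) with a linear-complexity sequential scan (global modeling, Mamba-style), can be illustrated with a minimal sketch. This is a toy illustration of the general hybrid pattern, not the paper's T-PMambaSR implementation: the `window_attention`, `linear_scan`, and `hybrid_block` names, the simple exponential-decay recurrence standing in for a selective scan, and all parameter choices are assumptions for demonstration.

```python
import numpy as np

def window_attention(x, window=4):
    """Self-attention computed independently inside each window of tokens,
    so cost is quadratic only in the (small) window size.
    x: (seq_len, dim); seq_len must be divisible by `window`."""
    n, d = x.shape
    out = np.empty_like(x)
    for s in range(0, n, window):
        w = x[s:s + window]                       # (window, dim)
        scores = w @ w.T / np.sqrt(d)             # (window, window)
        scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn = scores / scores.sum(axis=-1, keepdims=True)  # softmax rows
        out[s:s + window] = attn @ w
    return out

def linear_scan(x, decay=0.9):
    """Toy linear-complexity recurrence standing in for a selective scan:
    h_t = decay * h_{t-1} + x_t, one pass over the whole sequence, so every
    output position carries (decayed) global context."""
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = decay * h + x[t]
        out[t] = h
    return out

def hybrid_block(x, window=4):
    """Local windowed attention followed by a global linear-time scan,
    each wrapped in a residual connection."""
    x = x + window_attention(x, window)
    x = x + linear_scan(x)
    return x

x = np.random.default_rng(0).standard_normal((16, 8))
y = hybrid_block(x)
print(y.shape)
```

The design choice the sketch highlights is complexity: the attention stage costs O(n · w · d) for window size w, and the scan stage costs O(n · d), so the block as a whole stays linear in sequence length while still mixing both local and global information.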
Similar Papers
Efficient Vision Mamba for MRI Super-Resolution via Hybrid Selective Scanning
CV and Pattern Recognition
Makes MRI scans clearer and faster for doctors.
Exploring Non-Local Spatial-Angular Correlations with a Hybrid Mamba-Transformer Framework for Light Field Super-Resolution
CV and Pattern Recognition
Makes blurry light-field pictures sharper.
Versatile and Efficient Medical Image Super-Resolution Via Frequency-Gated Mamba
CV and Pattern Recognition
Makes blurry medical pictures sharp for doctors.