MambaIC: State Space Models for High-Performance Learned Image Compression
By: Fanhu Zeng, Hao Tang, Yihua Shao, and others
Potential Business Impact:
Makes pictures smaller for faster sharing.
A high-performance image compression algorithm is crucial for real-time information transmission across numerous fields. Despite rapid progress in image compression, computational inefficiency and poor redundancy modeling still pose significant bottlenecks, limiting practical applications. Inspired by the effectiveness of state space models (SSMs) in capturing long-range dependencies, we leverage SSMs to address computational inefficiency in existing methods and improve image compression from multiple perspectives. In this paper, we integrate the advantages of SSMs for better efficiency-performance trade-off and propose an enhanced image compression approach through refined context modeling, which we term MambaIC. Specifically, we explore context modeling to adaptively refine the representation of hidden states. Additionally, we introduce window-based local attention into channel-spatial entropy modeling to reduce potential spatial redundancy during compression, thereby increasing efficiency. Comprehensive qualitative and quantitative results validate the effectiveness and efficiency of our approach, particularly for high-resolution image compression. Code is released at https://github.com/AuroraZengfh/MambaIC.
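The core mechanism the abstract relies on is the state space model's linear recurrence, which lets each output depend on the entire input prefix while keeping computation linear in sequence length. Below is a minimal, generic SSM scan sketch for illustration only; it is not the MambaIC architecture (which uses selective, input-dependent parameters and the refined context modeling described above), and all names and shapes here are assumptions.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state space model recurrence (illustrative sketch):
        h_t = A @ h_{t-1} + B @ x_t
        y_t = C @ h_t
    x: (T, d_in) input sequence; A: (d_state, d_state) state transition;
    B: (d_state, d_in) input projection; C: (d_out, d_state) output projection.
    Each y_t aggregates information from x_1..x_t, which is how SSMs capture
    long-range dependencies at linear cost in T."""
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]   # update hidden state with the current input
        ys.append(C @ h)       # read out from the hidden state
    return np.stack(ys)

# Toy usage with a contracting transition so the state stays bounded.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
A = 0.5 * np.eye(3)
B = rng.normal(size=(3, 4))
C = rng.normal(size=(2, 3))
y = ssm_scan(x, A, B, C)       # y has shape (8, 2)
```

Because `A` is applied repeatedly, the influence of early inputs on later outputs decays geometrically but never vanishes, so perturbing the first input changes the final output: this is the long-range dependency property the abstract appeals to.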
Similar Papers
MambaMia: A State-Space-Model-Based Compression for Efficient Video Understanding in Large Multimodal Models
CV and Pattern Recognition
Makes computers understand long videos faster.
CMIC: Content-Adaptive Mamba for Learned Image Compression
CV and Pattern Recognition
Makes pictures smaller without losing quality.