SaMam: Style-aware State Space Model for Arbitrary Image Style Transfer
By: Hongda Liu, Longguang Wang, Ye Zhang, and more
Potential Business Impact:
Makes art look better with faster computer brains.
A global effective receptive field plays a crucial role in image style transfer (ST) for producing high-quality stylized results. However, existing ST backbones (e.g., CNNs and Transformers) incur huge computational complexity to achieve global receptive fields. Recently, the State Space Model (SSM), especially its improved variant Mamba, has shown great potential for long-range dependency modeling with linear complexity, which offers an approach to resolving this dilemma. In this paper, we develop a Mamba-based style transfer framework, termed SaMam. Specifically, a Mamba encoder is designed to efficiently extract content and style information. In addition, a style-aware Mamba decoder is developed to flexibly adapt to various styles. Moreover, to address the problems of local pixel forgetting, channel redundancy, and spatial discontinuity in existing SSMs, we introduce both local enhancement and a zigzag scan. Qualitative and quantitative results demonstrate that our SaMam outperforms state-of-the-art methods in terms of both accuracy and efficiency.
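The abstract does not include code, but the zigzag scan it mentions can be sketched concretely. SSMs such as Mamba process 1D token sequences, and a plain raster scan of an image places the end of one row next to the start of the next, creating exactly the spatial discontinuity noted above; a zigzag (serpentine) scan reverses every other row so consecutive tokens remain spatial neighbors. Below is a minimal PyTorch sketch of this idea, not the paper's implementation; the helper names zigzag_scan and zigzag_unscan are hypothetical.

import torch

def zigzag_scan(x: torch.Tensor) -> torch.Tensor:
    # Flatten a feature map (B, C, H, W) into a token sequence (B, H*W, C)
    # in serpentine order: even rows left-to-right, odd rows right-to-left,
    # so adjacent tokens stay spatially adjacent across row boundaries.
    B, C, H, W = x.shape
    x = x.permute(0, 2, 3, 1).clone()   # (B, H, W, C)
    x[:, 1::2] = x[:, 1::2].flip(2)     # reverse every other row along width
    return x.reshape(B, H * W, C)

def zigzag_unscan(seq: torch.Tensor, H: int, W: int) -> torch.Tensor:
    # Inverse of zigzag_scan: restore the (B, C, H, W) feature map.
    B, L, C = seq.shape
    x = seq.reshape(B, H, W, C).clone()
    x[:, 1::2] = x[:, 1::2].flip(2)     # undo the row reversal
    return x.permute(0, 3, 1, 2)

Since zigzag_unscan(zigzag_scan(x), H, W) returns x unchanged, the pair can be dropped around an off-the-shelf Mamba block in an encoder or decoder without otherwise altering the pipeline.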
Similar Papers
MambaStyle: Efficient StyleGAN Inversion for Real Image Editing with State-Space Models
Image and Video Processing
Changes pictures to look like other styles faster.
CMamba: Learned Image Compression with State Space Models
Image and Video Processing
Makes pictures smaller without losing quality.
DefMamba: Deformable Visual State Space Model
Computer Vision and Pattern Recognition
Finds important parts of pictures better.