MambaStyle: Efficient StyleGAN Inversion for Real Image Editing with State-Space Models
By: Jhon Lopez, Carlos Hinojosa, Henry Arguello, and more
Potential Business Impact:
Edits real photos by changing their attributes, faster and with less computing power.
The task of inverting real images into StyleGAN's latent space to manipulate their attributes has been extensively studied. However, existing GAN inversion methods struggle to balance high reconstruction quality, effective editability, and computational efficiency. In this paper, we introduce MambaStyle, an efficient single-stage encoder-based approach for GAN inversion and editing that leverages vision state-space models (VSSMs) to address these challenges. Specifically, our approach integrates VSSMs within the proposed architecture, enabling high-quality image inversion and flexible editing with significantly fewer parameters and reduced computational complexity compared to state-of-the-art methods. Extensive experiments show that MambaStyle achieves a superior balance among inversion accuracy, editing quality, and computational efficiency. Notably, our method achieves superior inversion and editing results with reduced model complexity and faster inference, making it suitable for real-time applications.
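The abstract does not include code, but the vision state-space models (VSSMs) it builds on use a selective, input-dependent state-space scan popularized by Mamba. As a rough illustration only, here is a minimal sketch of that scan; all function names, shapes, and the discretization details are assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def selective_ssm_scan(x, A, B, C, delta):
    """Minimal selective state-space scan (Mamba-style), for illustration only.

    x:     (T, D) input sequence (T steps, D channels)
    A:     (D, N) per-channel state-decay parameters (negative for stability)
    B, C:  (T, N) input-dependent projection and readout vectors
    delta: (T, D) input-dependent step sizes ("selectivity")
    Returns y: (T, D) output sequence.
    """
    T, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))   # hidden state, one N-dim state per channel
    y = np.zeros((T, D))
    for t in range(T):
        # Discretize the continuous parameters with the step size delta[t]:
        # A_bar = exp(delta * A), B_bar ~= delta * B (simple Euler-style rule)
        A_bar = np.exp(delta[t][:, None] * A)       # (D, N)
        B_bar = delta[t][:, None] * B[t][None, :]   # (D, N)
        h = A_bar * h + B_bar * x[t][:, None]       # recurrent state update
        y[t] = (h * C[t][None, :]).sum(axis=1)      # readout per channel
    return y
```

In practice, Mamba-style blocks compute this scan in parallel with hardware-aware kernels rather than a Python loop, and vision variants apply it along multiple spatial scan orders over image patches; this loop only shows the recurrence itself.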
Similar Papers
SaMam: Style-aware State Space Model for Arbitrary Image Style Transfer
CV and Pattern Recognition
Applies artistic styles to images efficiently using state-space models.
DefMamba: Deformable Visual State Space Model
CV and Pattern Recognition
Adapts its scanning to focus on the important regions of an image.
MambaIC: State Space Models for High-Performance Learned Image Compression
CV and Pattern Recognition
Shrinks image files more effectively for faster storage and sharing.