FluencyVE: Marrying Temporal-Aware Mamba with Bypass Attention for Video Editing
By: Mingshu Cai, Yixuan Li, Osamu Yoshie, and more
Large-scale text-to-image diffusion models have achieved unprecedented success in image generation and editing. However, extending this success to video editing remains challenging. Recent video editing efforts have adapted pretrained text-to-image models by adding temporal attention mechanisms to handle video tasks. Unfortunately, these methods continue to suffer from temporal inconsistency and high computational overhead. In this study, we propose FluencyVE, a simple yet effective one-shot video editing approach. FluencyVE integrates Mamba, a linear-time sequence-modeling module, into a video editing model built on pretrained Stable Diffusion, replacing the temporal attention layer. This enables global frame-level interaction while reducing computational cost. In addition, we replace the query and key weight matrices in the causal attention with low-rank approximation matrices, and use a weighted averaging technique during training to update the attention scores. This approach largely preserves the generative power of the text-to-image model while effectively reducing the computational burden. Experiments and analyses demonstrate promising results in editing various attributes, subjects, and locations in real-world videos.
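To make the two ingredients of the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: a Mamba block that mixes information along the frame axis where a temporal attention layer would normally sit, and a LoRA-style low-rank update to the frozen query/key projections of an attention layer. The tensor layout, hyperparameters (rank, d_state, etc.), and the class names `TemporalMamba` and `LowRankQK` are illustrative assumptions; the weighted-averaging update of attention scores described in the abstract is not shown. The sketch assumes the `mamba_ssm` package, whose kernels require a CUDA device.

```python
# Hedged sketch of (1) Mamba-based temporal mixing in place of temporal
# attention and (2) low-rank query/key projections. Not the FluencyVE code.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumption: mamba_ssm is installed (CUDA only)


class TemporalMamba(nn.Module):
    """Mixes features across frames with a linear-time Mamba block."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mamba = Mamba(d_model=dim, d_state=16, d_conv=4, expand=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, height*width, dim) latents from a UNet block.
        b, f, hw, d = x.shape
        # Treat every spatial location as an independent temporal sequence.
        seq = x.permute(0, 2, 1, 3).reshape(b * hw, f, d)
        seq = seq + self.mamba(self.norm(seq))  # residual temporal mixing
        return seq.reshape(b, hw, f, d).permute(0, 2, 1, 3)


class LowRankQK(nn.Module):
    """LoRA-style low-rank replacement for the query/key weight matrices."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        # In practice the pretrained projections would be loaded and frozen.
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        for p in (*self.to_q.parameters(), *self.to_k.parameters()):
            p.requires_grad_(False)
        # Trainable low-rank updates: W x + B(A x), with A, B of rank `rank`.
        self.q_down = nn.Linear(dim, rank, bias=False)
        self.q_up = nn.Linear(rank, dim, bias=False)
        self.k_down = nn.Linear(dim, rank, bias=False)
        self.k_up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.q_up.weight)  # start as an identity update
        nn.init.zeros_(self.k_up.weight)

    def forward(self, x: torch.Tensor):
        q = self.to_q(x) + self.q_up(self.q_down(x))
        k = self.to_k(x) + self.k_up(self.k_down(x))
        return q, k


if __name__ == "__main__":
    device = "cuda"  # mamba_ssm kernels require a GPU
    latents = torch.randn(1, 8, 32 * 32, 320, device=device)
    mixed = TemporalMamba(320).to(device)(latents)
    q, k = LowRankQK(320).to(device)(mixed.flatten(0, 1))
    print(mixed.shape, q.shape, k.shape)
```

Only the Mamba parameters and the rank-`rank` adapters are trainable here, which is one plausible reading of how the approach keeps the pretrained text-to-image weights intact while adding temporal capacity.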
Similar Papers
M4V: Multi-Modal Mamba for Text-to-Video Generation
CV and Pattern Recognition
Makes videos from words much faster.
TextMamba: Scene Text Detector with Mamba
CV and Pattern Recognition
Helps computers find words in messy pictures.
TimeViper: A Hybrid Mamba-Transformer Vision-Language Model for Efficient Long Video Understanding
CV and Pattern Recognition
Lets computers watch and understand hours of video.