Zero-Shot Video Translation and Editing with Frame Spatial-Temporal Correspondence
By: Shuai Yang, Junxin Lin, Yifan Zhou, and more
Potential Business Impact:
Makes videos look smooth and real, not jumpy.
The remarkable success of text-to-image diffusion models has motivated extensive investigation of their potential for video applications. Zero-shot techniques aim to adapt image diffusion models for videos without requiring further model training. Recent methods largely emphasize integrating inter-frame correspondence into attention mechanisms. However, the soft constraint they use to determine which features are valid to attend to is insufficient and can lead to temporal inconsistency. In this paper, we present FRESCO, which combines intra-frame correspondence with inter-frame correspondence to formulate a more robust spatial-temporal constraint. This enhancement ensures that semantically similar content is transformed consistently across frames. Our method goes beyond attention guidance to explicitly optimize features, achieving high spatial-temporal consistency with the input video and significantly enhancing the visual coherence of manipulated videos. We verify FRESCO on two zero-shot tasks: video-to-video translation and text-guided video editing. Comprehensive experiments demonstrate the effectiveness of our framework in generating high-quality, coherent videos, highlighting a significant advance over current zero-shot methods.
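To make the abstract's core idea concrete: "integrating inter-frame correspondence into attention" typically means letting each frame's tokens attend to tokens from other frames as well as their own. The sketch below is a minimal, hypothetical illustration of that joint intra-/inter-frame attention in NumPy; it is not FRESCO's actual formulation (which adds correspondence constraints and explicit feature optimization), and all function and variable names are our own.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def spatial_temporal_attention(feats):
    """Toy cross-frame self-attention (illustrative only).

    Each frame's tokens attend jointly to the tokens of every frame:
    attending to the same frame is the intra-frame part, attending to
    other frames is the inter-frame part.

    feats: (T, N, C) array — T frames, N spatial tokens, C channels.
    Returns: (T, N, C) aggregated features.
    """
    T, N, C = feats.shape
    q = feats                        # (T, N, C) queries, one set per frame
    kv = feats.reshape(T * N, C)     # keys/values pooled across all frames
    scores = q @ kv.T / np.sqrt(C)   # (T, N, T*N) scaled similarities
    attn = softmax(scores, axis=-1)  # each token's weights over all frames
    return attn @ kv                 # (T, N, C) temporally mixed features
```

Because every frame draws from a shared key/value pool, semantically similar content in different frames is aggregated from the same sources, which is the intuition behind the temporal-consistency gain the abstract describes.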
Similar Papers
Are Image-to-Video Models Good Zero-Shot Image Editors?
CV and Pattern Recognition
Changes pictures using spoken words.
Xiaoice: Training-Free Video Understanding via Self-Supervised Spatio-Temporal Clustering of Semantic Features
CV and Pattern Recognition
Makes computers understand videos without extra training.
Generative Editing in the Joint Vision-Language Space for Zero-Shot Composed Image Retrieval
CV and Pattern Recognition
Finds images using text and a picture.