V2Edit: Versatile Video Diffusion Editor for Videos and 3D Scenes
By: Yanming Zhang, Jun-Kun Chen, Jipeng Lyu, and more
Potential Business Impact:
Changes videos and 3D worlds with text instructions.
This paper introduces V$^2$Edit, a novel training-free framework for instruction-guided video and 3D scene editing. Addressing the critical challenge of balancing original content preservation with editing task fulfillment, our approach employs a progressive strategy that decomposes complex editing tasks into a sequence of simpler subtasks. Each subtask is controlled through three key synergistic mechanisms: the initial noise, noise added at each denoising step, and cross-attention maps between text prompts and video content. This ensures robust preservation of original video elements while effectively applying the desired edits. Beyond its native video editing capability, we extend V$^2$Edit to 3D scene editing via a "render-edit-reconstruct" process, enabling high-quality, 3D-consistent edits even for tasks involving substantial geometric changes such as object insertion. Extensive experiments demonstrate that our V$^2$Edit achieves high-quality and successful edits across various challenging video editing tasks and complex 3D scene editing tasks, thereby establishing state-of-the-art performance in both domains.
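The progressive strategy described above can be sketched in toy form: a complex instruction is decomposed into subtasks, and each subtask's denoising run reuses the same initial noise and the same per-step noise so that unedited content stays anchored to the source video. The sketch below is a minimal, hypothetical illustration of that control flow only, not the paper's implementation; all function and variable names (`progressive_edit`, `apply_attention_guidance`, etc.) are assumptions, and the "denoising" is stand-in arithmetic rather than a real diffusion model.

```python
import random

def apply_attention_guidance(latent, prompt):
    # Hypothetical stand-in for the paper's third control: steering
    # cross-attention maps between the text prompt and video content.
    # Here it is a no-op scaling so the sketch stays self-contained.
    return [x * 0.95 for x in latent]

def progressive_edit(video, subtasks, steps=4, seed=0):
    """Toy sketch of a progressive editing loop (assumed structure).

    Controls illustrated:
      1. the same initial noise starts every subtask's denoising run,
      2. the same noise is injected at each denoising step,
      3. cross-attention guidance ties the edit to the subtask prompt.
    """
    rng = random.Random(seed)
    initial_noise = [rng.random() for _ in video]                # control 1
    step_noise = [[rng.random() for _ in video] for _ in range(steps)]

    latent = list(video)
    for prompt in subtasks:
        # Start this subtask from the shared initial noise (control 1).
        x = [v + n for v, n in zip(latent, initial_noise)]
        for t in range(steps):
            # Inject the same fixed noise at this denoising step (control 2).
            x = [xi + 0.1 * step_noise[t][i] for i, xi in enumerate(x)]
            # Guide the edit toward the subtask prompt (control 3).
            x = apply_attention_guidance(x, prompt)
        latent = x  # the next subtask edits the result of this one
    return latent
```

Because the noise is seeded once and reused, repeated runs on the same input and subtask sequence are deterministic, which is the property that lets each subtask preserve content the previous subtasks left untouched.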
Similar Papers
EasyV2V: A High-quality Instruction-based Video Editing Framework
CV and Pattern Recognition
Changes videos with simple text instructions.
S$^2$Edit: Text-Guided Image Editing with Precise Semantic and Spatial Control
CV and Pattern Recognition
Changes faces in pictures without losing identity.
OmniV2V: Versatile Video Generation and Editing via Dynamic Content Manipulation
CV and Pattern Recognition
Edits and makes videos from text and pictures.