IC-Effect: Precise and Efficient Video Effects Editing via In-Context Learning
By: Yuanhang Li, Yiren Song, Junzhe Bai, and more
Potential Business Impact:
Adds visual effects to videos without changing the background.
We propose IC-Effect, an instruction-guided, DiT-based framework for few-shot video VFX editing that synthesizes complex effects (e.g., flames, particles, and cartoon characters) while strictly preserving spatial and temporal consistency. Video VFX editing is highly challenging because injected effects must blend seamlessly with the background, the background must remain entirely unchanged, and effect patterns must be learned efficiently from limited paired data. However, existing video editing models fail to satisfy these requirements. IC-Effect leverages the source video as a clean contextual condition, exploiting the in-context learning capability of DiT models to achieve precise background preservation and natural effect injection. A two-stage training strategy, consisting of general editing adaptation followed by effect-specific learning via Effect-LoRA, ensures strong instruction following and robust effect modeling. To further improve efficiency, we introduce spatiotemporal sparse tokenization, enabling high fidelity with substantially reduced computation. We also release a paired VFX editing dataset spanning 15 high-quality visual styles. Extensive experiments show that IC-Effect delivers high-quality, controllable, and temporally consistent VFX editing, opening new possibilities for video creation.
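For intuition, here is a minimal, hypothetical PyTorch sketch of the two ideas named in the abstract: in-context conditioning (clean source-video tokens concatenated with noisy target tokens so joint attention can preserve the background) and spatiotemporal sparse tokenization (subsampling the context tokens). All names (`DiTBlock`, `sparse_tokens`, `in_context_step`) and the stride-based sparsification are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """A minimal DiT-style block: self-attention + MLP over a token sequence."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

def sparse_tokens(tokens: torch.Tensor, t_stride: int = 2, s_stride: int = 2):
    """Assumed form of spatiotemporal sparse tokenization: keep every
    t_stride-th frame and every s_stride-th spatial token of the context.
    tokens: (B, T, N, D) = batch, frames, tokens per frame, channels."""
    return tokens[:, ::t_stride, ::s_stride, :]

def in_context_step(block: DiTBlock, src_tokens, noisy_tokens):
    """One block applied to a [context | target] token sequence.
    src_tokens: clean source-video tokens (B, T, N, D), kept noise-free so
    attention can copy the background; noisy_tokens: target tokens (B, T, N, D)."""
    B, _, _, D = noisy_tokens.shape
    ctx = sparse_tokens(src_tokens).reshape(B, -1, D)   # subsampled context
    tgt = noisy_tokens.reshape(B, -1, D)
    seq = torch.cat([ctx, tgt], dim=1)                  # joint attention
    out = block(seq)
    return out[:, ctx.shape[1]:, :].reshape_as(noisy_tokens)  # target part only

# Toy usage: 8 frames, a 16x16 latent grid (256 tokens), 64-dim tokens.
if __name__ == "__main__":
    block = DiTBlock(dim=64)
    src = torch.randn(1, 8, 256, 64)
    noisy = torch.randn(1, 8, 256, 64)
    print(in_context_step(block, src, noisy).shape)  # torch.Size([1, 8, 256, 64])
```

Under these assumptions, subsampling only the context is what buys the efficiency: attention cost grows with the combined sequence length, so a 2x temporal and 2x spatial stride cuts the context tokens by 4x while the target tokens stay dense.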
Similar Papers
In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer
CV and Pattern Recognition
Edits pictures using words, faster and better.
VFXMaster: Unlocking Dynamic Visual Effect Generation via In-Context Learning
CV and Pattern Recognition
Makes any video effect with one example.
Are Image-to-Video Models Good Zero-Shot Image Editors?
CV and Pattern Recognition
Edits pictures with video models, using written instructions and no extra training.