InsertAnywhere: Bridging 4D Scene Geometry and Diffusion Models for Realistic Video Object Insertion
By: Hoiyeong Jin, Hyojin Jang, Jeongho Kim, and more
Recent advances in diffusion-based video generation have opened new possibilities for controllable video editing, yet realistic video object insertion (VOI) remains challenging due to limited 4D scene understanding and inadequate handling of occlusion and lighting effects. We present InsertAnywhere, a new VOI framework that achieves geometrically consistent object placement and appearance-faithful video synthesis. Our method begins with a 4D-aware mask generation module that reconstructs the scene geometry and propagates a user-specified object placement across frames while maintaining temporal coherence and occlusion consistency. Building upon this spatial foundation, we extend a diffusion-based video generation model to jointly synthesize the inserted object and its surrounding local variations, such as illumination and shading. To enable supervised training, we introduce ROSE++, an illumination-aware synthetic dataset constructed by transforming the ROSE object removal dataset into triplets of an object-removed video, an object-present video, and a VLM-generated reference image. Through extensive experiments, we demonstrate that our framework produces geometrically plausible and visually coherent object insertions across diverse real-world scenarios, significantly outperforming existing research and commercial models.
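The abstract describes a two-stage data flow: 4D-aware mask propagation from a single user-specified placement, followed by diffusion-based synthesis conditioned on those masks and a reference image, trained on ROSE++ triplets. The following minimal Python sketch only illustrates that data flow under assumed conventions; every name here (Placement, RoseTriplet, propagate_placement, synthesize) is hypothetical, and the function bodies are trivial stand-ins for the paper's actual 4D reconstruction and diffusion modules.

from dataclasses import dataclass
import numpy as np

# Assumed conventions: a video is a (T, H, W, 3) uint8 array,
# a mask sequence is a (T, H, W) bool array.
Video = np.ndarray
MaskSequence = np.ndarray

@dataclass
class Placement:
    """User-specified object placement in one reference frame (hypothetical)."""
    frame_index: int
    mask: np.ndarray  # (H, W) bool footprint of the object in that frame

@dataclass
class RoseTriplet:
    """One ROSE++ training sample, per the abstract (hypothetical container)."""
    object_removed: Video        # video with the object removed
    object_present: Video        # same video with the object present
    reference_image: np.ndarray  # VLM-generated reference image of the object

def propagate_placement(video: Video, placement: Placement) -> MaskSequence:
    """4D-aware mask generation (stand-in). The real module reconstructs
    scene geometry and carries the user-specified mask across frames so the
    footprint stays temporally coherent and occlusion-consistent; here we
    simply replicate the mask to every frame."""
    t = video.shape[0]
    return np.repeat(placement.mask[None, ...], t, axis=0)

def synthesize(video: Video, masks: MaskSequence, reference: np.ndarray) -> Video:
    """Diffusion-based synthesis (stand-in). The real model jointly generates
    the inserted object and local effects such as illumination and shading;
    here we merely paste the reference's mean color into the masked region."""
    out = video.copy()
    out[masks] = reference.reshape(-1, 3).mean(axis=0).astype(out.dtype)
    return out

# Usage: insert a reference-colored patch into an 8-frame dummy video.
video = np.zeros((8, 64, 64, 3), dtype=np.uint8)
footprint = np.zeros((64, 64), dtype=bool)
footprint[20:40, 20:40] = True
reference = np.full((16, 16, 3), 200, dtype=np.uint8)
masks = propagate_placement(video, Placement(frame_index=0, mask=footprint))
edited = synthesize(video, masks, reference)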
Similar Papers
VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control
CV and Pattern Recognition
Inserts arbitrary objects into videos with high fidelity and precise motion control.
OmniInsert: Mask-Free Video Insertion of Any Reference via Diffusion Transformer Models
CV and Pattern Recognition
Mask-free insertion of reference subjects into videos via diffusion transformer models.
Virtually Being: Customizing Camera-Controllable Video Diffusion Models with Multi-View Performance Captures
CV and Pattern Recognition
Customizes camera-controllable video diffusion models with multi-view performance captures for consistent character appearance.