MotionFlow: Learning Implicit Motion Flow for Complex Camera Trajectory Control in Video Generation
By: Guojun Lei, Chi Wang, Yikai Wang, and more
Potential Business Impact:
Generates videos that accurately follow specified camera moves.
Generating videos guided by camera trajectories poses significant challenges in achieving consistency and generalizability, particularly when both camera and object motions are present. Existing approaches often attempt to learn these motions separately, which may lead to confusion regarding the relative motion between the camera and the objects. To address this challenge, we propose a novel approach that integrates both camera and object motions by converting them into the motion of corresponding pixels. Utilizing a stable diffusion network, we effectively learn reference motion maps in relation to the specified camera trajectory. These maps, along with an extracted semantic object prior, are then fed into an image-to-video network to generate the desired video that can accurately follow the designated camera trajectory while maintaining consistent object motions. Extensive experiments verify that our model outperforms SOTA methods by a large margin.
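The abstract's core idea is converting camera motion into the motion of corresponding pixels. One standard way to do this (a minimal illustrative sketch, not the paper's learned motion maps) is the rigid-flow computation: back-project each pixel using its depth, apply the relative camera pose, re-project, and take the pixel displacement. The intrinsics `K`, depth map, and pose `(R, t)` below are assumed inputs for illustration.

```python
import numpy as np

def camera_induced_flow(depth, K, R, t):
    """Per-pixel motion (optical flow) induced by a camera move over a static scene.

    depth : (H, W) depth map in camera units
    K     : (3, 3) camera intrinsics
    R, t  : relative camera rotation (3, 3) and translation (3,)
    Returns a (2, H, W) array of (dx, dy) pixel displacements.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Homogeneous pixel coordinates, one column per pixel.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    # Back-project to 3D camera coordinates using the depth map.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Apply the relative camera pose, then re-project to the image plane.
    pts2 = R @ pts + t.reshape(3, 1)
    proj = K @ pts2
    proj = proj[:2] / proj[2:3]
    # Flow is the displacement between re-projected and original pixels.
    return (proj - pix[:2]).reshape(2, h, w)
```

For a pure sideways translation over a fronto-parallel plane at depth `d`, this yields a constant horizontal flow of `fx * tx / d` pixels, matching the intuition that nearer scene content moves more under the same camera motion.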
Similar Papers
MotionDiff: Training-free Zero-shot Interactive Motion Editing via Flow-assisted Multi-view Diffusion
CV and Pattern Recognition
Lets users interactively edit how objects move in videos.
Object-centric 3D Motion Field for Robot Learning from Human Videos
Robotics
Robots learn to do tasks by watching videos.
FlowMo: Variance-Based Flow Guidance for Coherent Motion in Video Generation
CV and Pattern Recognition
Makes videos move more smoothly and realistically.