WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception
By: Zhiheng Liu, Xueqing Deng, Shoufa Chen, and more
Potential Business Impact:
Keeps generated videos looking realistic for longer without accumulating errors.
Generative video modeling has made significant strides, yet ensuring structural and temporal consistency over long sequences remains a challenge. Current methods predominantly rely on RGB signals, leading to accumulated errors in object structure and motion over extended durations. To address these issues, we introduce WorldWeaver, a robust framework for long video generation that jointly models RGB frames and perceptual conditions within a unified long-horizon modeling scheme. Our training framework offers three key advantages. First, by jointly predicting perceptual conditions and color information from a unified representation, it significantly enhances temporal consistency and motion dynamics. Second, by leveraging depth cues, which we observe to be more resistant to drift than RGB, we construct a memory bank that preserves clearer contextual information, improving quality in long-horizon video generation. Third, we employ segmented noise scheduling for training prediction groups, which further mitigates drift and reduces computational cost. Extensive experiments on both diffusion- and rectified-flow-based models demonstrate the effectiveness of WorldWeaver in reducing temporal drift and improving the fidelity of generated videos.
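The abstract does not spell out implementation details, but the segmented noise scheduling idea can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch version, assuming a rectified-flow forward process and a linear per-group noise ladder; the function name, group layout, and schedule are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def segmented_noise_schedule(latents: torch.Tensor, num_groups: int,
                             t_min: float = 0.1, t_max: float = 1.0):
    """Noise a long clip group-by-group instead of uniformly.

    latents: clean frame latents of shape (T, C, H, W).
    Splits the T frames into `num_groups` prediction groups and assigns
    each group a single noise level, increasing along the horizon, so
    earlier groups stay cleaner and act as context for later ones.
    The linear per-group schedule here is an illustrative assumption.
    """
    T = latents.shape[0]
    groups = torch.chunk(torch.arange(T), num_groups)
    levels = torch.linspace(t_min, t_max, num_groups)
    t = torch.empty(T)
    for idx, level in zip(groups, levels):
        t[idx] = level
    noise = torch.randn_like(latents)
    # Rectified-flow-style interpolation: x_t = (1 - t) * x_0 + t * noise.
    t_b = t.view(T, 1, 1, 1)
    return (1.0 - t_b) * latents + t_b * noise, t

# Example: 16 frames split into 4 prediction groups.
noisy, timesteps = segmented_noise_schedule(torch.randn(16, 4, 32, 32), num_groups=4)
```

Because each group shares one noise level, the denoiser only needs to be supervised on a few levels per clip rather than one per frame, which is consistent with the abstract's claim that the scheme reduces training cost while keeping clean context ahead of the noisier frames being predicted.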
Similar Papers
WorldGrow: Generating Infinite 3D World
CV and Pattern Recognition
Builds endless, realistic 3D worlds for games.
WorldReel: 4D Video Generation with Consistent Geometry and Motion Modeling
CV and Pattern Recognition
Creates realistic videos whose geometry and motion stay consistent over time.
GeoWorld: Unlocking the Potential of Geometry Models to Facilitate High-Fidelity 3D Scene Generation
CV and Pattern Recognition
Creates realistic 3D worlds from pictures.