WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception

Published: August 21, 2025 | arXiv ID: 2508.15720v1

By: Zhiheng Liu, Xueqing Deng, Shoufa Chen, et al.

Potential Business Impact:

Enables video generation systems to produce much longer videos that remain structurally and temporally consistent, avoiding the accumulated visual errors that degrade current long-horizon methods.

Business Areas:
Image Recognition Data and Analytics, Software

Generative video modeling has made significant strides, yet ensuring structural and temporal consistency over long sequences remains a challenge. Current methods predominantly rely on RGB signals, leading to accumulated errors in object structure and motion over extended durations. To address these issues, we introduce WorldWeaver, a robust framework for long video generation that jointly models RGB frames and perceptual conditions within a unified long-horizon modeling scheme. Our training framework offers three key advantages. First, by jointly predicting perceptual conditions and color information from a unified representation, it significantly enhances temporal consistency and motion dynamics. Second, by leveraging depth cues, which we observe to be more resistant to drift than RGB, we construct a memory bank that preserves clearer contextual information, improving quality in long-horizon video generation. Third, we employ segmented noise scheduling for training prediction groups, which further mitigates drift and reduces computational cost. Extensive experiments on both diffusion-based and rectified-flow-based models demonstrate the effectiveness of WorldWeaver in reducing temporal drift and improving the fidelity of generated videos.
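The abstract does not spell out the mechanics of segmented noise scheduling, but one plausible reading is that frames are split into contiguous prediction groups, with each group sharing a single noise level and later groups at least as noisy as earlier ones, so the model learns to denoise the future conditioned on a cleaner past. The sketch below is a hypothetical illustration of that idea, not the paper's implementation: the function names (`segmented_noise_levels`, `noisy_training_batch`), the monotone per-group timestep assumption, and the rectified-flow-style corruption x_t = (1 - t)·x + t·ε are all our own assumptions.

```python
import torch

def segmented_noise_levels(num_frames, group_size, device="cpu"):
    """Assign one shared noise level per prediction group (hypothetical sketch).

    Frames are split into contiguous groups; one timestep in [0, 1) is
    sampled per group, then sorted ascending so noise increases
    monotonically from earlier (context-like) groups to later ones.
    """
    num_groups = (num_frames + group_size - 1) // group_size
    t_groups, _ = torch.sort(torch.rand(num_groups, device=device))
    # Broadcast each group's timestep to all of its frames.
    t_frames = t_groups.repeat_interleave(group_size)[:num_frames]
    return t_frames

def noisy_training_batch(x, group_size):
    """Corrupt a clean video tensor x of shape (T, C, H, W) group-wise.

    Uses a rectified-flow-style linear interpolation between data and
    noise: x_t = (1 - t) * x + t * noise, with t constant within a group.
    """
    t = segmented_noise_levels(x.shape[0], group_size, device=x.device)
    noise = torch.randn_like(x)
    t_view = t.view(-1, 1, 1, 1)
    x_t = (1.0 - t_view) * x + t_view * noise
    return x_t, t, noise
```

Under this reading, sharing a timestep within each group keeps earlier segments cleaner as conditioning context, which is consistent with the abstract's claim that segmented scheduling both mitigates drift and reduces computational cost; the exact grouping and schedule in WorldWeaver may differ.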

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition