LongVie 2: Multimodal Controllable Ultra-Long Video World Model
By: Jianxiong Gao, Zhaoxi Chen, Xian Liu, and more
Potential Business Impact:
Makes very long videos that stay realistic, consistent, and under your control.
Building video world models upon pretrained video generation systems represents an important yet challenging step toward general spatiotemporal intelligence. A world model should possess three essential properties: controllability, long-term visual quality, and temporal consistency. To this end, we take a progressive approach: first enhancing controllability, then extending toward long-term, high-quality generation. We present LongVie 2, an end-to-end autoregressive framework trained in three stages: (1) Multi-modal guidance, which integrates dense and sparse control signals to provide implicit world-level supervision and improve controllability; (2) Degradation-aware training on the input frame, which bridges the gap between training and long-term inference to maintain high visual quality; and (3) History-context guidance, which aligns contextual information across adjacent clips to ensure temporal consistency. We further introduce LongVGenBench, a comprehensive benchmark comprising 100 high-resolution, one-minute videos covering diverse real-world and synthetic environments. Extensive experiments demonstrate that LongVie 2 achieves state-of-the-art performance in long-range controllability, temporal coherence, and visual fidelity, and supports continuous video generation lasting up to five minutes, marking a significant step toward unified video world modeling.
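To make the three-stage pipeline concrete, here is a minimal PyTorch sketch of the clip-by-clip autoregressive loop the abstract describes. This is not the authors' implementation: `ClipGenerator`, the `degrade()` recipe, the pooled history vector, and all tensor shapes are illustrative assumptions standing in for the pretrained backbone and the paper's actual guidance mechanisms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClipGenerator(nn.Module):
    """Toy stand-in for the pretrained video backbone (hypothetical)."""
    def __init__(self, ctrl_ch=4):
        super().__init__()
        # Per-frame input: conditioning frame (3 ch) + dense control map (ctrl_ch).
        self.video = nn.Conv3d(3 + ctrl_ch, 3, kernel_size=3, padding=1)
        self.hist_proj = nn.Linear(3, 3)  # injects history context as a bias

    def forward(self, first_frame, dense_ctrl, sparse_ctrl, history):
        # Broadcast the (degraded) conditioning frame across the clip's T frames.
        T = dense_ctrl.shape[2]
        cond = first_frame.unsqueeze(2).expand(-1, -1, T, -1, -1)
        clip = self.video(torch.cat([cond, dense_ctrl], dim=1))
        # History-context guidance: one simple way to align information across
        # adjacent clips (the paper's actual mechanism is not specified here).
        clip = clip + self.hist_proj(history)[:, :, None, None, None]
        # Sparse signals (e.g. point trajectories) folded in as a residual.
        return clip + sparse_ctrl

def degrade(frame, noise=0.05):
    """Degradation-aware conditioning: corrupt the input frame so training
    resembles the imperfect frames seen during long rollouts (assumed recipe)."""
    small = F.interpolate(frame, scale_factor=0.5, mode="bilinear")
    blurred = F.interpolate(small, size=frame.shape[-2:], mode="bilinear")
    return blurred + noise * torch.randn_like(frame)

@torch.no_grad()
def rollout(model, frame, dense_ctrls, sparse_ctrls):
    """Autoregressive loop: each clip's last frame seeds the next clip."""
    history = torch.zeros(frame.shape[0], 3)
    clips = []
    for dense, sparse in zip(dense_ctrls, sparse_ctrls):
        clip = model(degrade(frame), dense, sparse, history)
        clips.append(clip)
        frame = clip[:, :, -1]               # (B, 3, H, W) seed for next clip
        history = clip.mean(dim=(2, 3, 4))   # pooled context for next clip
    return torch.cat(clips, dim=2)           # concatenate along time

# Tiny smoke test with random tensors: 2 clips of 8 frames at 32x32.
model = ClipGenerator()
frame = torch.rand(1, 3, 32, 32)
dense = [torch.rand(1, 4, 8, 32, 32) for _ in range(2)]
sparse = [torch.zeros(1, 3, 8, 32, 32) for _ in range(2)]
video = rollout(model, frame, dense, sparse)
print(video.shape)  # torch.Size([1, 3, 16, 32, 32])
```

The key design point the sketch captures is that errors compound across clips in autoregressive generation, so the conditioning frame is deliberately degraded during training and a compact history signal is threaded between clips, matching stages (2) and (3) of the paper at a schematic level.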
Similar Papers
LongVie: Multimodal-Guided Controllable Ultra-Long Video Generation
CV and Pattern Recognition
Makes computers create very long, clear, and controlled videos.
LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling
CV and Pattern Recognition
Helps computers understand long videos better.
WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception
CV and Pattern Recognition
Makes videos look real for longer without errors.