WorldPack: Compressed Memory Improves Spatial Consistency in Video World Modeling

Published: December 2, 2025 | arXiv ID: 2512.02473v1

By: Yuta Oshima, Yusuke Iwasawa, Masahiro Suzuki, and more

Potential Business Impact:

Lets computers generate consistent future video scenes over longer horizons.

Business Areas:
Motion Capture, Media and Entertainment, Video

Video world models have attracted significant attention for their ability to produce high-fidelity future visual observations conditioned on past observations and navigation actions. Temporally and spatially consistent long-term world modeling has been a long-standing problem, unresolved even by recent state-of-the-art models, because of the prohibitively expensive computational cost of long-context inputs. In this paper, we propose WorldPack, a video world model with an efficient compressed memory that significantly improves spatial consistency, fidelity, and quality in long-term generation despite a much shorter context length. The compressed memory consists of trajectory packing and memory retrieval: trajectory packing achieves high context efficiency, while memory retrieval maintains consistency across rollouts and supports long-term generation that requires spatial reasoning. We evaluate WorldPack on LoopNav, a Minecraft benchmark designed to assess long-term consistency, and verify that it notably outperforms strong state-of-the-art models.
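To make the two components concrete, here is a minimal toy sketch of a compressed memory with trajectory packing and retrieval. The abstract does not describe the paper's actual mechanisms, so everything below is an illustrative assumption: the class name CompressedMemory, mean-pooling as the "packing" step, and cosine-similarity retrieval are stand-ins, not WorldPack's implementation.

```python
import numpy as np

class CompressedMemory:
    """Toy sketch (not the paper's method): pack past-frame features into
    a small number of memory slots, then retrieve the slots most relevant
    to the current step so the generator sees a short context."""

    def __init__(self, pack_size: int = 8, top_k: int = 4):
        self.pack_size = pack_size   # frames compressed into one slot
        self.top_k = top_k           # slots retrieved per query
        self.slots: list[np.ndarray] = []

    def pack_trajectory(self, frames: np.ndarray) -> None:
        """Compress a trajectory of frame features into a few slots.
        Mean-pooling stands in for a learned compression module."""
        for start in range(0, len(frames), self.pack_size):
            chunk = frames[start:start + self.pack_size]
            self.slots.append(chunk.mean(axis=0))

    def retrieve(self, query: np.ndarray) -> np.ndarray:
        """Return the top-k slots most similar to the query feature
        (cosine similarity here; the real model would learn this)."""
        if not self.slots:
            return np.empty((0, query.shape[-1]))
        bank = np.stack(self.slots)
        sims = bank @ query / (
            np.linalg.norm(bank, axis=1) * np.linalg.norm(query) + 1e-8
        )
        top = np.argsort(sims)[::-1][: self.top_k]
        return bank[top]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    memory = CompressedMemory(pack_size=8, top_k=2)
    memory.pack_trajectory(rng.normal(size=(64, 128)))  # 64 past frames -> 8 slots
    context = memory.retrieve(rng.normal(size=128))     # compact context for generation
    print(context.shape)  # (2, 128)
```

The sketch only illustrates the division of labor the abstract describes: packing keeps the memory bank small regardless of trajectory length, and retrieval pulls back the few entries needed to keep a long rollout spatially consistent.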

Country of Origin
🇯🇵 Japan

Page Count
16 pages

Category
Computer Science:
CV and Pattern Recognition