Yume-1.5: A Text-Controlled Interactive World Generation Model
By: Xiaofeng Mao, Zhen Li, Chuanhao Li, and more
Potential Business Impact:
Creates explorable worlds from pictures or words.
Recent approaches have demonstrated the promise of diffusion models for generating interactive, explorable worlds. However, most of these methods face critical challenges, such as excessively large parameter counts, reliance on lengthy inference steps, and rapidly growing historical context, that severely limit real-time performance; most also lack text-controlled generation. To address these challenges, we propose Yume-1.5, a novel framework that generates realistic, interactive, and continuous worlds from a single image or text prompt and supports keyboard-based exploration of the generated worlds. The framework comprises three core components: (1) a long-video generation framework integrating unified context compression with linear attention; (2) a real-time streaming acceleration strategy powered by bidirectional attention distillation and an enhanced text embedding scheme; and (3) a text-controlled method for generating world events. We provide the codebase in the supplementary material.
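Of the three components, linear attention is the most self-contained to illustrate. The sketch below is a generic, minimal linear-attention layer in PyTorch, not Yume-1.5's actual implementation (the function name, the elu+1 feature map from Katharopoulos et al. (2020), and the tensor layout are assumptions for illustration). It shows how replacing softmax attention with a kernel feature map turns the O(n²) attention cost into O(n), which is what makes a long, compressed video context affordable.

```python
# Minimal linear-attention sketch (illustrative only; Yume-1.5's actual
# layer lives in the paper's codebase and may differ).
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps: float = 1e-6):
    # q, k, v: (batch, heads, seq_len, dim). Names and layout are assumptions.
    q = F.elu(q) + 1.0                 # phi(Q): positive feature map
    k = F.elu(k) + 1.0                 # phi(K)
    # Fixed-size (dim x dim) summary of the whole history: O(n) to build,
    # constant memory -- attention cost no longer grows quadratically.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    # Normalizer: phi(Q) dotted with the sum of phi(K) over the sequence.
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

# Example: 512 tokens, 8 heads, 64-dim heads.
out = linear_attention(*(torch.randn(1, 8, 512, 64) for _ in range(3)))
print(out.shape)  # torch.Size([1, 8, 512, 64])
```

Because the key-value summary has constant size regardless of sequence length, it can be updated incrementally during streaming generation rather than re-attending over an ever-growing history, which is the property the abstract's "rapidly growing historical context" point concerns.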
Similar Papers
Yume: An Interactive World Generation Model
CV and Pattern Recognition
Creates explorable worlds from single pictures.
Hunyuan-GameCraft-2: Instruction-following Interactive Game World Model
CV and Pattern Recognition
Makes game worlds react to your commands.
WorldGen: From Text to Traversable and Interactive 3D Worlds
CV and Pattern Recognition
Creates 3D worlds from your words.