Score: 1

Yume-1.5: A Text-Controlled Interactive World Generation Model

Published: December 26, 2025 | arXiv ID: 2512.22096v1

By: Xiaofeng Mao, Zhen Li, Chuanhao Li, and more

Potential Business Impact:

Creates explorable worlds from pictures or words.

Business Areas:
Simulation Software

Recent approaches have demonstrated the promise of using diffusion models to generate interactive and explorable worlds. However, most of these methods face critical challenges such as excessively large parameter sizes, reliance on lengthy inference steps, and rapidly growing historical context, which severely limit real-time performance; they also lack text-controlled generation capabilities. To address these challenges, we propose Yume-1.5, a novel framework designed to generate realistic, interactive, and continuous worlds from a single image or text prompt. Yume-1.5 achieves this through a carefully designed architecture that supports keyboard-based exploration of the generated worlds. The framework comprises three core components: (1) a long-video generation framework integrating unified context compression with linear attention; (2) a real-time streaming acceleration strategy powered by bidirectional attention distillation and an enhanced text embedding scheme; (3) a text-controlled method for generating world events. The codebase is provided in the supplementary material.
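Component (1) names linear attention as the mechanism for keeping the historical context from growing quadratically with sequence length. The paper's specific "unified context compression" design is not described in this summary, so the snippet below is only a minimal, generic linear-attention sketch in PyTorch (Katharopoulos-style kernel feature map); the `linear_attention` function, shapes, and parameters are illustrative assumptions, not the authors' code.

```python
import torch


def linear_attention(q, k, v, eps=1e-6):
    # Hypothetical illustration, not Yume-1.5's implementation.
    # q, k, v: (batch, seq_len, dim)
    # Kernel feature map phi(x) = elu(x) + 1 keeps features positive,
    # as in standard linear-attention formulations.
    phi_q = torch.nn.functional.elu(q) + 1
    phi_k = torch.nn.functional.elu(k) + 1
    # Compress all keys/values into one fixed-size summary:
    # cost O(N * d^2) instead of the O(N^2 * d) of softmax attention.
    kv = torch.einsum("bnd,bne->bde", phi_k, v)       # (batch, dim, dim_v)
    k_sum = phi_k.sum(dim=1)                          # (batch, dim)
    # Each query attends to the compressed summary rather than every past token.
    num = torch.einsum("bnd,bde->bne", phi_q, kv)     # (batch, seq, dim_v)
    den = torch.einsum("bnd,bd->bn", phi_q, k_sum).unsqueeze(-1) + eps
    return num / den


# Example: 128 frame tokens with 64-dim features
q = torch.randn(1, 128, 64)
k = torch.randn(1, 128, 64)
v = torch.randn(1, 128, 64)
out = linear_attention(q, k, v)
print(out.shape)  # torch.Size([1, 128, 64])
```

Because the key-value summary has a fixed size regardless of how many past frames have been generated, this style of attention is a natural fit for streaming world generation where history would otherwise grow without bound.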


Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition