Score: 1

ViewMask-1-to-3: Multi-View Consistent Image Generation via Multimodal Diffusion Models

Published: December 16, 2025 | arXiv ID: 2512.14099v1

By: Ruishu Zhu, Zhihao Huang, Jiacheng Sun, and more

Potential Business Impact:

Generates consistent images of a single object from multiple viewpoints, given one image and a text description.

Business Areas:
Image Recognition Data and Analytics, Software

Multi-view image generation from a single image and text description remains challenging due to the difficulty of maintaining geometric consistency across different viewpoints. Existing approaches typically rely on 3D-aware architectures or specialized diffusion models that require extensive multi-view training data and complex geometric priors. In this work, we introduce ViewMask-1-to-3, a pioneering application of discrete diffusion models to multi-view image generation. Unlike continuous diffusion methods that operate in latent spaces, ViewMask-1-to-3 formulates multi-view synthesis as a discrete sequence modeling problem, where each viewpoint is represented as visual tokens obtained through MAGVIT-v2 tokenization. By unifying language and vision through masked token prediction, our approach enables progressive generation of multiple viewpoints via iterative token unmasking conditioned on the text input. ViewMask-1-to-3 achieves cross-view consistency through simple random masking combined with self-attention, eliminating the need for complex 3D geometric constraints or specialized attention architectures. Our approach demonstrates that discrete diffusion provides a viable and simple alternative to existing multi-view generation methods, ranking first on average across the GSO and 3D-FUTURE datasets in terms of PSNR, SSIM, and LPIPS, while maintaining architectural simplicity.
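
The core mechanism the abstract describes, iterative unmasking of discrete visual tokens for several views at once, can be sketched compactly. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: `model` is assumed to be a bidirectional transformer returning per-position logits over the visual vocabulary, `MASK_ID`, the grid sizes, and the linear schedule are hypothetical choices, and a MAGVIT-v2-style tokenizer is assumed to decode the resulting token grids back to images.

```python
import torch

MASK_ID = 8192          # hypothetical id reserved for the [MASK] token
TOKENS_PER_VIEW = 256   # e.g. a 16x16 grid of visual tokens per viewpoint
NUM_VIEWS = 3           # number of target viewpoints to synthesize
STEPS = 12              # number of unmasking iterations

@torch.no_grad()
def generate_views(model, text_tokens, source_tokens, steps=STEPS):
    """Progressively unmask the visual tokens of all target views at once.

    `model` (assumed): a bidirectional transformer that takes one flat token
    sequence and returns logits of shape (seq_len, vocab_size). Text, source
    view, and all target views share a single self-attention context, which
    is what couples the generated viewpoints.
    """
    device = source_tokens.device
    # Every target-view token starts out masked.
    targets = torch.full((NUM_VIEWS * TOKENS_PER_VIEW,), MASK_ID, device=device)

    for step in range(steps):
        seq = torch.cat([text_tokens, source_tokens, targets])
        logits = model(seq)[-targets.numel():]   # logits for target positions
        conf, pred = logits.softmax(-1).max(-1)

        still_masked = targets == MASK_ID
        # Linear schedule: a shrinking fraction of tokens stays masked.
        keep_masked = int(still_masked.sum().item() * (1 - (step + 1) / steps))
        # Reveal the most confident predictions; already-revealed tokens stay fixed.
        conf = conf.masked_fill(~still_masked, float("inf"))
        if keep_masked > 0:
            cutoff = conf.kthvalue(keep_masked).values
            reveal = still_masked & (conf > cutoff)
        else:
            reveal = still_masked
        targets[reveal] = pred[reveal]

    # Decode each view's token grid to pixels with the visual tokenizer (not shown).
    return targets.view(NUM_VIEWS, TOKENS_PER_VIEW)
```

Because all target views occupy one token sequence and one self-attention pass per step, cross-view consistency emerges from joint prediction rather than explicit geometric constraints, which is consistent with the paper's claim of architectural simplicity.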

Country of Origin
🇨🇳 🇭🇰 China, Hong Kong

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition