LumiX: Structured and Coherent Text-to-Intrinsic Generation
By: Xu Han, Biao Zhang, Xiangjun Tang, and more
Potential Business Impact:
Creates consistent scene images together with their lighting, geometry, and material maps from text descriptions.
We present LumiX, a structured diffusion framework for coherent text-to-intrinsic generation. Conditioned on text prompts, LumiX jointly generates a comprehensive set of intrinsic maps (e.g., albedo, irradiance, normal, depth, and final color), providing a structured and physically consistent description of an underlying scene. This is enabled by two key contributions: 1) Query-Broadcast Attention, a mechanism that ensures structural consistency by sharing queries across all maps in each self-attention block, and 2) Tensor LoRA, a tensor-based adaptation that models cross-map relations parameter-efficiently for joint training. Together, these designs enable stable joint diffusion training and unified generation of multiple intrinsic properties. Experiments show that LumiX produces coherent and physically meaningful results, achieving 23% higher alignment and a better preference score (0.19 vs. -0.41) than the state of the art, and it can also perform image-conditioned intrinsic decomposition within the same framework.
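To make the query-sharing idea concrete, here is a minimal PyTorch sketch of a Query-Broadcast Attention block. It is not the paper's implementation: the module name, the choice of pooling token features across maps to form the shared query, and all shapes and hyperparameters are assumptions for illustration. The intent it captures is that every intrinsic map attends with the same queries, so the maps lock onto the same spatial structure, while keys and values remain per-map.

```python
# Hypothetical sketch of Query-Broadcast Attention (module name, mean-pooling
# across maps to build the shared query, and all shapes are assumptions,
# not the paper's exact formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class QueryBroadcastAttention(nn.Module):
    """Self-attention over several intrinsic maps that share one set of queries.

    Input: tokens of shape (batch, num_maps, seq_len, dim), one token grid per
    intrinsic map (albedo, normal, depth, ...). A single query tensor is
    computed from the pooled maps and broadcast to every map, while keys and
    values stay per-map, so all maps attend to the same spatial structure.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, 2 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, m, n, d = x.shape  # batch, maps, tokens, channels

        # Shared queries: pool across maps (an assumption), then broadcast to all maps.
        q = self.to_q(x.mean(dim=1))           # (b, n, d)
        q = q.unsqueeze(1).expand(b, m, n, d)  # (b, m, n, d)

        # Per-map keys and values.
        k, v = self.to_kv(x).chunk(2, dim=-1)  # each (b, m, n, d)

        # Split heads and run scaled dot-product attention independently per map.
        def split(t: torch.Tensor) -> torch.Tensor:
            return t.view(b, m, n, self.num_heads, self.head_dim).permute(0, 1, 3, 2, 4)

        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.permute(0, 1, 3, 2, 4).reshape(b, m, n, d)
        return self.proj(out)


# Minimal usage: five intrinsic maps, 16x16 token grids (256 tokens), 256-dim features.
if __name__ == "__main__":
    attn = QueryBroadcastAttention(dim=256, num_heads=8)
    tokens = torch.randn(2, 5, 256, 256)  # (batch, maps, tokens, dim)
    print(attn(tokens).shape)             # torch.Size([2, 5, 256, 256])
```

Tensor LoRA, the second contribution, would sit on top of such blocks as a parameter-efficient adapter over the cross-map dimension; its exact factorization is not shown here.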
Similar Papers
LumiTex: Towards High-Fidelity PBR Texture Generation with Illumination Context
CV and Pattern Recognition
Creates realistic textures for computer graphics.
LumiGen: An LVLM-Enhanced Iterative Framework for Fine-Grained Text-to-Image Generation
Machine Learning (CS)
Makes AI draw pictures exactly as you describe.
ReasonX: MLLM-Guided Intrinsic Image Decomposition
CV and Pattern Recognition
Teaches computers to see image parts better.