PRISM: A Unified Framework for Photorealistic Reconstruction and Intrinsic Scene Modeling
By: Alara Dirik, Tuanfeng Wang, Duygu Ceylan, and others
Potential Business Impact:
Lets one AI model both draw pictures and change them.
We present PRISM, a unified framework that enables multiple image generation and editing tasks in a single foundational model. Starting from a pre-trained text-to-image diffusion model, PRISM uses an effective fine-tuning strategy to produce RGB images along with intrinsic maps (referred to as X layers) simultaneously. Unlike previous approaches, which infer intrinsic properties individually or require separate models for decomposition and conditional generation, PRISM maintains consistency across modalities by generating all intrinsic layers jointly. It supports diverse tasks, including text-to-RGBX generation, RGB-to-X decomposition, and X-to-RGBX conditional generation. Additionally, PRISM enables both global and local image editing through conditioning on selected intrinsic layers and text prompts. Extensive experiments demonstrate the competitive performance of PRISM both for intrinsic image decomposition and conditional image generation while preserving the base model's text-to-image generation capability.
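The core idea of the abstract, generating RGB and all intrinsic X layers jointly with one denoiser so the modalities stay consistent, can be sketched abstractly. The toy NumPy snippet below stacks RGB and example intrinsic layers (albedo, normals, depth) into one tensor and denoises them together; `denoise_step` and all layer names are illustrative stand-ins, not PRISM's actual architecture or API:

```python
import numpy as np

# Toy joint-denoising sketch: RGB (3 channels) plus intrinsic "X layers"
# (albedo, normals, depth) stacked into one tensor so a single model
# denoises them together. All names here are illustrative, not PRISM's API.

H, W = 8, 8
LAYERS = {"rgb": 3, "albedo": 3, "normal": 3, "depth": 1}
C = sum(LAYERS.values())  # total channels denoised jointly

rng = np.random.default_rng(0)

def denoise_step(x, t):
    """Stand-in for one reverse-diffusion step of a joint RGBX model.
    A real model would predict noise conditioned on the timestep and a
    text prompt; here we just shrink toward zero to keep the sketch runnable."""
    return x * (1.0 - 1.0 / t)

# Start from pure noise over ALL modalities at once.
x = rng.standard_normal((C, H, W))
for t in range(50, 0, -1):
    x = denoise_step(x, t + 1)

# Split the jointly generated tensor back into named layers.
out, i = {}, 0
for name, ch in LAYERS.items():
    out[name] = x[i:i + ch]
    i += ch

print(sorted(out))       # -> ['albedo', 'depth', 'normal', 'rgb']
print(out["rgb"].shape)  # -> (3, 8, 8)
```

Because every layer is a slice of the same jointly denoised tensor, any conditioning applied during the loop (e.g. fixing the depth slice for X-to-RGBX generation) influences all other layers, which is the consistency property the abstract emphasizes.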
Similar Papers
PRISM: Probabilistic Representation for Integrated Shape Modeling and Generation
CV and Pattern Recognition
Builds 3D objects with many different parts.
PRISM: High-Resolution & Precise Counterfactual Medical Image Generation using Language-guided Stable Diffusion
CV and Pattern Recognition
Makes AI better at understanding medical pictures.
PRISM: Pointcloud Reintegrated Inference via Segmentation and Cross-attention for Manipulation
Robotics
Teaches robots to grab things in messy rooms.