VideoMat: Extracting PBR Materials from Video Diffusion Models
By: Jacob Munkberg, Zian Wang, Ruofan Liang, and more
Potential Business Impact:
Makes 3D objects look real from text or pictures.
We leverage fine-tuned video diffusion models, intrinsic decomposition of videos, and physically based differentiable rendering to generate high-quality materials for 3D models given a text prompt or a single image. First, we condition a video diffusion model to respect the input geometry and lighting conditions; this model produces multiple views of the given 3D model with coherent material properties. Second, we use a recent intrinsic decomposition model to extract intrinsics (base color, roughness, metallic) from the generated video. Finally, we feed the intrinsics alongside the generated video into a differentiable path tracer to robustly extract PBR materials that are directly compatible with common content creation tools.
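To make the final stage more concrete, below is a minimal sketch (not the authors' code) of what fitting PBR texture maps to a generated video plus extracted intrinsics can look like. The paper uses a differentiable path tracer; here a toy single-bounce shading function stands in for it, and all tensor names, shapes, and loss weights are assumptions for illustration only.

```python
# Hypothetical sketch of the material-fitting stage: optimize base color,
# roughness, and metallic maps against generated video frames and intrinsics.
import torch

H, W, num_frames = 256, 256, 16

# Learnable material maps in texture space; sigmoid keeps values in [0, 1].
# Channels: 3 for base color, 1 for roughness, 1 for metallic.
material_logits = torch.zeros(5, H, W, requires_grad=True)

def shade(base_color, roughness, metallic, n_dot_l):
    """Toy single-bounce shading stand-in for the differentiable path tracer."""
    diffuse = base_color * (1.0 - metallic)
    specular = metallic + (1.0 - metallic) * 0.04
    gloss = 1.0 - roughness
    return (diffuse + specular * gloss) * n_dot_l

# Placeholder targets: frames from the generated video, intrinsics from the
# decomposition model, and per-frame geometry buffers (all hypothetical data).
video_frames       = torch.rand(num_frames, 3, H, W)
target_base_color  = torch.rand(num_frames, 3, H, W)
target_roughness   = torch.rand(num_frames, 1, H, W)
target_metallic    = torch.rand(num_frames, 1, H, W)
n_dot_l            = torch.rand(num_frames, 1, H, W)

opt = torch.optim.Adam([material_logits], lr=1e-2)

for step in range(200):
    maps = torch.sigmoid(material_logits)
    base_color, roughness, metallic = maps[:3], maps[3:4], maps[4:5]

    rendered = shade(base_color, roughness, metallic, n_dot_l)

    # Photometric loss against the generated video, plus intrinsic losses that
    # anchor each material channel to the decomposition output.
    loss = (
        torch.nn.functional.l1_loss(rendered, video_frames)
        + torch.nn.functional.l1_loss(base_color.expand_as(target_base_color), target_base_color)
        + torch.nn.functional.l1_loss(roughness.expand_as(target_roughness), target_roughness)
        + torch.nn.functional.l1_loss(metallic.expand_as(target_metallic), target_metallic)
    )

    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```

Because the optimized quantities are ordinary texture maps (base color, roughness, metallic), the result can be exported directly to standard PBR workflows in common content creation tools, as the abstract notes.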
Similar Papers
SViM3D: Stable Video Material Diffusion for Single Image 3D Generation
Graphics
Makes 3D objects look real with new lighting.
MaterialMVP: Illumination-Invariant Material Generation via Multi-view PBR Diffusion
CV and Pattern Recognition
Makes 3D objects look real in any light.