PhysHDR: When Lighting Meets Materials and Scene Geometry in HDR Reconstruction
By: Hrishav Bakul Barua, Kalin Stefanov, Ganesh Krishnasamy and more
Potential Business Impact:
Recovers detail in both the very bright and very dark parts of photos.
Low Dynamic Range (LDR) to High Dynamic Range (HDR) image translation is a fundamental task in many computational vision problems. Numerous data-driven methods have been proposed to address this problem; however, they lack explicit modeling of illumination, lighting, and scene geometry in images. This limits the quality of the reconstructed HDR images. Since lighting and shadows interact differently with different materials (e.g., specular surfaces such as glass and metal, and Lambertian or diffuse surfaces such as wood and stone), modeling material-specific properties (e.g., specular and diffuse reflectance) has the potential to improve the quality of HDR image reconstruction. This paper presents PhysHDR, a simple yet powerful latent diffusion-based generative model for HDR image reconstruction. The denoising process is conditioned on lighting and depth information and guided by a novel loss to incorporate material properties of surfaces in the scene. The experimental results establish the efficacy of PhysHDR in comparison to a number of recent state-of-the-art methods.
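To make the idea concrete, below is a minimal conceptual sketch, not the authors' implementation, of the two ingredients the abstract describes: a denoiser whose input is conditioned on lighting and depth maps, and a material-aware reconstruction loss that weights specular regions differently from diffuse ones. All module names, channel sizes, and the specific weighting scheme (`ConditionedDenoiser`, `material_weighted_loss`, `w_spec`, `w_diff`) are illustrative assumptions, not details taken from the paper.

```python
# Conceptual sketch only: conditioning a denoiser on lighting/depth maps and
# applying a material-weighted loss. Not the PhysHDR implementation.
import torch
import torch.nn as nn


class ConditionedDenoiser(nn.Module):
    """Toy denoiser: the noisy HDR latent is concatenated with lighting and
    depth conditioning maps along the channel axis before prediction."""

    def __init__(self, latent_ch=4, cond_ch=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + cond_ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, lighting_map, depth_map):
        cond = torch.cat([lighting_map, depth_map], dim=1)
        return self.net(torch.cat([noisy_latent, cond], dim=1))


def material_weighted_loss(pred, target, specular_mask, w_spec=2.0, w_diff=1.0):
    """Hypothetical material-aware loss: errors on specular regions (glass,
    metal) are weighted more heavily than on diffuse/Lambertian regions."""
    weights = w_diff + (w_spec - w_diff) * specular_mask
    return (weights * (pred - target) ** 2).mean()


if __name__ == "__main__":
    B, C, H, W = 2, 4, 32, 32
    denoiser = ConditionedDenoiser(latent_ch=C, cond_ch=3 + 1)
    noisy = torch.randn(B, C, H, W)
    lighting = torch.randn(B, 3, H, W)   # illumination/shading estimate (stand-in)
    depth = torch.randn(B, 1, H, W)      # monocular depth estimate (stand-in)
    target = torch.randn(B, C, H, W)     # clean HDR latent (stand-in)
    spec_mask = torch.rand(B, 1, H, W)   # soft specular-material mask (stand-in)

    pred = denoiser(noisy, lighting, depth)
    loss = material_weighted_loss(pred, target, spec_mask)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

The sketch uses simple channel-wise concatenation for conditioning and a soft mask for material weighting; the paper's actual conditioning mechanism and loss formulation may differ.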
Similar Papers
Reconstructing 3D Scenes in Native High Dynamic Range
CV and Pattern Recognition
Builds realistic 3D scenes directly from bright, detailed (HDR) photos.
Boosting HDR Image Reconstruction via Semantic Knowledge Transfer
CV and Pattern Recognition
Uses knowledge of what is in a scene to restore missing photo detail.
Semi-Supervised High Dynamic Range Image Reconstructing via Bi-Level Uncertain Area Masking
CV and Pattern Recognition
Learns to make better photos from fewer labeled examples.