Boosting HDR Image Reconstruction via Semantic Knowledge Transfer
By: Qingsen Yan, Tao Hu, Genggeng Chen, and more
Potential Business Impact:
Recovers detail lost in over- or under-exposed photos.
Recovering High Dynamic Range (HDR) images from multiple Low Dynamic Range (LDR) images becomes challenging when the LDR images exhibit noticeable degradation and missing content. Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions. However, these priors are typically extracted from sRGB Standard Dynamic Range (SDR) images, and the resulting domain/format gap poses a significant challenge when applying them to HDR imaging. To address this issue, we propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction methods. Specifically, the proposed framework first introduces the Semantic Priors Guided Reconstruction Model (SPGRM), which leverages SDR image semantic knowledge to address ill-posed problems in the initial HDR reconstruction results. Subsequently, we apply a self-distillation mechanism that constrains the color and content information with semantic knowledge, aligning the external outputs of the baseline and the SPGRM. Furthermore, to transfer the semantic knowledge of the internal features, we employ a Semantic Knowledge Alignment Module (SKAM) to fill in missing semantic content using complementary masks. Extensive experiments demonstrate that our method significantly improves the HDR imaging quality of existing methods.
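The abstract describes a teacher-student setup: a semantic-priors-guided model (SPGRM) supervises a baseline HDR network on both its output and its internal features, with complementary masks filling in missing semantic content (SKAM). Below is a minimal sketch of that idea, not the authors' implementation; the module names (TinyHDRNet, complementary_mask_fusion), the threshold, and the loss weighting are all hypothetical stand-ins.

```python
# Sketch of the self-distillation idea: a frozen teacher (SPGRM stand-in) and a
# trainable student (baseline HDR model) see the same LDR stack; the student is
# aligned to the teacher's output, and its features are filled with the teacher's
# via complementary masks in the spirit of SKAM. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyHDRNet(nn.Module):
    """Stand-in for any HDR reconstruction backbone (baseline or SPGRM)."""

    def __init__(self, in_ch=9, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, ldr_stack):
        feat = self.encoder(ldr_stack)   # internal features (for feature alignment)
        hdr = self.decoder(feat)         # reconstructed HDR image
        return hdr, feat


def complementary_mask_fusion(student_feat, teacher_feat, threshold=0.5):
    """Hypothetical SKAM-style step: where student features are weakly activated,
    fill them with the teacher's semantic features using a complementary mask."""
    mask = (student_feat.abs().mean(1, keepdim=True) > threshold).float()
    return mask * student_feat + (1.0 - mask) * teacher_feat


# Toy training step on random data (three 3-channel LDR exposures, stacked).
ldrs = torch.rand(2, 9, 64, 64)
teacher = TinyHDRNet().eval()            # SPGRM stand-in, kept frozen
student = TinyHDRNet()                   # baseline HDR model being boosted
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

with torch.no_grad():
    t_hdr, t_feat = teacher(ldrs)
s_hdr, s_feat = student(ldrs)

fused = complementary_mask_fusion(s_feat, t_feat)
# Output-level alignment (external outputs) plus feature-level alignment (SKAM-like).
loss = F.l1_loss(s_hdr, t_hdr) + F.l1_loss(s_feat, fused.detach())
opt.zero_grad(); loss.backward(); opt.step()
```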
Similar Papers
Semi-Supervised High Dynamic Range Image Reconstructing via Bi-Level Uncertain Area Masking
CV and Pattern Recognition
Makes better photos with fewer examples.
PhysHDR: When Lighting Meets Materials and Scene Geometry in HDR Reconstruction
Graphics
Makes dark photos look bright and clear.
Enhanced Semantic Extraction and Guidance for UGC Image Super Resolution
CV and Pattern Recognition
Makes blurry phone pictures sharp and clear.