Improving the color accuracy of lighting estimation models
By: Zitian Zhang, Joshua Urban Davis, Jeanne Phuong Anh Vu, and more
Potential Business Impact:
Makes virtual objects look real in photos.
Advances in high dynamic range (HDR) lighting estimation from a single image have opened new possibilities for augmented reality (AR) applications. Predicting complex lighting environments from a single input image allows virtual objects to be rendered and composited realistically. In this work, we investigate the color robustness of such methods, an often overlooked yet critical factor for visual realism. While most evaluations conflate color with other lighting attributes (e.g., intensity, direction), we isolate color as the primary variable of interest. Rather than introducing a new lighting estimation algorithm, we explore whether simple adaptation techniques can improve the color accuracy of existing models. Using a novel HDR dataset featuring diverse lighting colors, we systematically evaluate several adaptation strategies. Our results show that preprocessing the input image with a pre-trained white-balance network improves color robustness, outperforming the other strategies across all tested scenarios. Notably, this approach requires no retraining of the lighting estimation model. We further validate the generality of this finding by applying the technique to three state-of-the-art lighting estimation methods from the recent literature.
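The winning strategy described above, white-balancing the input before running an unchanged lighting estimator, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `white_balance` and `estimate_lighting` functions below are hypothetical stand-ins (a classical gray-world correction and a mean-color placeholder) for the pre-trained white-balance network and the frozen lighting estimation model.

```python
import numpy as np

def white_balance(image: np.ndarray) -> np.ndarray:
    """Stand-in for a pre-trained white-balance network.
    Here: a simple gray-world correction that equalizes channel means."""
    means = image.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means  # per-channel gain toward a neutral cast
    return np.clip(image * gain, 0.0, 1.0)

def estimate_lighting(image: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen lighting-estimation model.
    Placeholder output: the image's dominant color as an RGB triple."""
    return image.reshape(-1, 3).mean(axis=0)

# Simulate an input with a strong warm color cast (reddish light).
rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, size=(16, 16, 3)) * np.array([1.4, 1.0, 0.6])
image = np.clip(image, 0.0, 1.0)

# Pipeline: white-balance first, then run the unchanged estimator.
# No retraining of the lighting model is involved.
corrected = white_balance(image)
lighting = estimate_lighting(corrected)
```

The point of the sketch is the ordering: the adaptation happens entirely in preprocessing, so any existing lighting estimation model can be dropped in unmodified.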
Similar Papers
Revisiting Image Fusion for Multi-Illuminant White-Balance Correction
CV and Pattern Recognition
Fixes photos with mixed lighting better.
Boosting Illuminant Estimation in Deep Color Constancy through Enhancing Brightness Robustness
CV and Pattern Recognition
Keeps color correction accurate when brightness changes.
PhysHDR: When Lighting Meets Materials and Scene Geometry in HDR Reconstruction
Graphics
Makes dark photos look bright and clear.