Revisiting Image Fusion for Multi-Illuminant White-Balance Correction
By: David Serrano-Lozano, Aditya Arora, Luis Herranz, and more
Potential Business Impact:
Improves color correction for photos taken under mixed lighting.
White balance (WB) correction in scenes with multiple illuminants remains a persistent challenge in computer vision. Recent methods have explored fusion-based approaches, where a neural network linearly blends multiple sRGB versions of an input image, each processed with a predefined WB preset. However, we demonstrate that these methods are suboptimal for common multi-illuminant scenarios. Additionally, existing fusion-based methods rely on sRGB WB datasets lacking dedicated multi-illuminant images, limiting both training and evaluation. To address these challenges, we introduce two key contributions. First, we propose an efficient transformer-based model that effectively captures spatial dependencies across sRGB WB presets, substantially improving upon linear fusion techniques. Second, we introduce a large-scale multi-illuminant dataset comprising over 16,000 sRGB images rendered with five different WB settings, along with WB-corrected images. Our method achieves up to 100% improvement over existing techniques on our new multi-illuminant image fusion dataset.
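To make the fusion idea concrete, below is a minimal sketch (not the authors' code) of the linear fusion baseline the abstract describes: a small network predicts per-pixel weights that blend several sRGB renderings of the same scene, each produced with a different WB preset. The class name, network architecture, tensor shapes, and the choice of five presets are illustrative assumptions.

```python
# Minimal sketch of per-pixel linear fusion over WB presets (assumed design,
# not the paper's implementation).
import torch
import torch.nn as nn

class LinearWBFusion(nn.Module):
    """Predicts per-pixel blending weights over K WB presets and fuses them."""
    def __init__(self, num_presets: int = 5):
        super().__init__()
        # Tiny CNN stand-in for the weight-prediction network.
        self.weight_net = nn.Sequential(
            nn.Conv2d(3 * num_presets, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_presets, 3, padding=1),
        )

    def forward(self, presets: torch.Tensor) -> torch.Tensor:
        # presets: (B, K, 3, H, W) -- K sRGB renderings with different WB presets.
        b, k, c, h, w = presets.shape
        logits = self.weight_net(presets.reshape(b, k * c, h, w))   # (B, K, H, W)
        weights = torch.softmax(logits, dim=1).unsqueeze(2)         # (B, K, 1, H, W)
        return (weights * presets).sum(dim=1)                       # fused image (B, 3, H, W)

# Usage with five hypothetical presets (e.g., tungsten, fluorescent, daylight, cloudy, shade).
model = LinearWBFusion(num_presets=5)
fused = model(torch.rand(1, 5, 3, 256, 256))  # -> (1, 3, 256, 256)
```

The paper's contribution replaces this purely local, linear blend with a transformer-based fusion that models spatial dependencies across the preset renderings; the sketch above only illustrates the baseline being improved upon.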
Similar Papers
Multi-illuminant Color Constancy via Multi-scale Illuminant Estimation and Fusion
CV and Pattern Recognition
Corrects color casts caused by multiple light sources in a scene.
FusionNet: Multi-model Linear Fusion Framework for Low-light Image Enhancement
CV and Pattern Recognition
Brightens and clarifies photos taken in low light.
Improving the color accuracy of lighting estimation models
CV and Pattern Recognition
Makes virtual objects blend realistically into photos by improving lighting estimates.