Interpretable Unsupervised Joint Denoising and Enhancement for Real-World Low-Light Scenarios
By: Huaqiu Li, Xiaowan Hu, Haoqian Wang
Potential Business Impact:
Fixes dark, noisy pictures without needing good examples.
Real-world low-light images often suffer from complex degradations such as local overexposure, low brightness, noise, and uneven illumination. Supervised methods tend to overfit to specific scenarios, while unsupervised methods, though better at generalization, struggle to model these degradations due to the lack of reference images. To address this issue, we propose an interpretable, zero-reference joint denoising and low-light enhancement framework tailored for real-world scenarios. Our method derives a training strategy based on paired sub-images with varying illumination and noise levels, grounded in physical imaging principles and Retinex theory. Additionally, we leverage the Discrete Cosine Transform (DCT) to perform frequency-domain decomposition in the sRGB space, and introduce an implicit-guided hybrid representation strategy that effectively separates intricate compounded degradations. For the backbone, we develop a Retinex decomposition network guided by implicit degradation representation mechanisms. Extensive experiments demonstrate the superiority of our method. Code will be available at https://github.com/huaqlili/unsupervised-light-enhance-ICLR2025.
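The two building blocks the abstract names can be illustrated concretely. Below is a minimal sketch, not the paper's actual implementation: Retinex theory models an image as reflectance times illumination (here the illumination estimate is the common per-pixel channel-max heuristic), and the DCT splits a channel into low- and high-frequency bands by masking transform coefficients. Function names, the `keep` band size, and the illumination heuristic are all assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def retinex_decompose(img, eps=1e-6):
    """Split an HxWx3 image in [0,1] into reflectance and illumination.

    Retinex model: img = reflectance * illumination. A simple heuristic
    estimates illumination as the per-pixel maximum over color channels.
    """
    illum = img.max(axis=2, keepdims=True)          # H x W x 1
    reflect = img / (illum + eps)                   # reflectance in [0,1]
    return reflect, illum

def dct_band_split(channel, keep=8):
    """Split a 2-D channel into low- and high-frequency components via DCT.

    Low band: the top-left `keep` x `keep` block of DCT coefficients
    (lowest spatial frequencies); high band: the residual.
    """
    coeffs = dctn(channel, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    low = idctn(coeffs * mask, norm="ortho")
    high = channel - low
    return low, high

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = 0.1 + 0.8 * rng.random((16, 16, 3))       # keep values away from 0
    reflect, illum = retinex_decompose(img)
    low, high = dct_band_split(img[:, :, 0])
    # Both decompositions reconstruct the input.
    print(np.allclose(reflect * illum, img, atol=1e-4))
    print(np.allclose(low + high, img[:, :, 0], atol=1e-8))
```

In a denoising-and-enhancement pipeline of the kind described, the illumination component would be brightened while noise, which concentrates in the high-frequency band, is suppressed separately.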
Similar Papers
A Poisson-Guided Decomposition Network for Extreme Low-Light Image Enhancement
Image and Video Processing
Makes dark pictures clear and bright.
Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for Low-Light Image Enhancement
CV and Pattern Recognition
Makes dark pictures clear and detailed.
Self-supervision via Controlled Transformation and Unpaired Self-conditioning for Low-light Image Enhancement
CV and Pattern Recognition
Makes dark pictures clear without needing matching pairs.