DarkDiff: Advancing Low-Light Raw Enhancement by Retasking Diffusion Models for Camera ISP
By: Amber Yijia Zheng, Yu Zhang, Jun Hu, and more
Potential Business Impact:
Makes dark photos look clear and colorful.
High-quality photography in extreme low-light conditions is challenging but impactful for digital cameras. With advances in computing hardware, traditional camera image signal processor (ISP) algorithms are gradually being replaced by deep networks that enhance noisy raw images more intelligently and efficiently. However, existing regression-based models, which minimize pixel-wise errors, tend to oversmooth low-light photos and deep shadows. Recent work has attempted to address this limitation by training a diffusion model from scratch, yet those models still struggle to recover sharp image details and accurate colors. We introduce a novel framework that enhances low-light raw images by retasking pre-trained generative diffusion models within the camera ISP. Extensive experiments demonstrate that our method outperforms the state of the art in perceptual quality across three challenging low-light raw image benchmarks.
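To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' code or architecture) of conditioning a diffusion-style denoiser on a packed low-light raw image and iteratively refining toward a clean sRGB output. TinyDenoiser is a toy placeholder standing in for a real pre-trained diffusion backbone, and the update rule is a deliberately simplified stand-in for a proper sampling schedule.

```python
# Hypothetical sketch: diffusion-style refinement conditioned on a raw image.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Placeholder for a pre-trained diffusion UNet; predicts a noise residual."""

    def __init__(self, cond_ch=4, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_ch + img_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, img_ch, 3, padding=1),
        )

    def forward(self, x_t, raw_cond):
        # Concatenate the current noisy estimate with the raw-image condition.
        return self.net(torch.cat([x_t, raw_cond], dim=1))


@torch.no_grad()
def enhance(raw_packed, denoiser, steps=10):
    """Coarse iterative refinement: start from Gaussian noise and repeatedly
    subtract the predicted noise, conditioned on the low-light raw input.
    (Illustrative only; not an actual DDPM/DDIM sampler.)"""
    b, _, h, w = raw_packed.shape
    x = torch.randn(b, 3, h, w)        # start from pure noise
    for _ in range(steps):
        eps = denoiser(x, raw_packed)  # predict noise at this step
        x = x - eps / steps            # simplistic update rule
    return x.clamp(0, 1)


raw = torch.rand(1, 4, 64, 64)          # toy packed Bayer (RGGB) raw patch
srgb = enhance(raw, TinyDenoiser())
print(srgb.shape)                       # torch.Size([1, 3, 64, 64])
```

In contrast to a regression network trained with a pixel-wise L1/L2 loss (which averages over plausible outputs and oversmooths shadows), this kind of conditional generative refinement can, in principle, synthesize sharper detail and more natural colors; the paper's contribution is in how a pre-trained diffusion model is retasked within the camera ISP, which this toy sketch does not capture.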
Similar Papers
Dark Noise Diffusion: Noise Synthesis for Low-Light Image Denoising
CV and Pattern Recognition
Makes dark photos clear by creating realistic noise.
ISPDiffuser: Learning RAW-to-sRGB Mappings with Texture-Aware Diffusion Models and Histogram-Guided Color Consistency
CV and Pattern Recognition
Makes phone pictures look like fancy camera photos.
TS-Diff: Two-Stage Diffusion Model for Low-Light RAW Image Enhancement
CV and Pattern Recognition
Makes dark photos bright and clear.