Casual3DHDR: Deblurring High Dynamic Range 3D Gaussian Splatting from Casually Captured Videos

Published: April 24, 2025 | arXiv ID: 2504.17728v3

By: Shucheng Gong, Lingzhe Zhao, Wenpu Li, et al.

Potential Business Impact:

Reconstructs photo-realistic, high-dynamic-range 3D scenes from shaky, casually captured videos.

Business Areas:
Virtual Reality Hardware, Software

Photo-realistic novel view synthesis from multi-view images, such as neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS), has gained significant attention for its superior performance. However, most existing methods rely on low dynamic range (LDR) images, limiting their ability to capture detailed scenes in high-contrast environments. While some prior works address high dynamic range (HDR) scene reconstruction, they typically require multi-view sharp images with varying exposure times captured at fixed camera positions, which is time-consuming and impractical. To make data acquisition more flexible, we propose Casual3DHDR, a robust one-stage method that reconstructs 3D HDR scenes from casually-captured auto-exposure (AE) videos, even under severe motion blur and unknown, varying exposure times. Our approach integrates a continuous-time camera trajectory into a unified physical imaging model, jointly optimizing exposure times, camera trajectory, and the camera response function (CRF). Extensive experiments on synthetic and real-world datasets demonstrate that Casual3DHDR outperforms existing methods in robustness and rendering quality. Our source code and dataset will be available at https://lingzhezhao.github.io/CasualHDRSplat/
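The physical imaging model described in the abstract can be sketched in a few lines: a blurry LDR frame is modeled as HDR radiance rendered along a continuous-time camera trajectory, integrated over the exposure window, and passed through the camera response function. The function names, the gamma-curve CRF, and the toy radiance function below are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def crf(hdr, gamma=2.2):
    # Hypothetical camera response function: a simple gamma curve.
    # The paper jointly optimizes a learned CRF instead.
    return np.clip(hdr, 0.0, 1.0) ** (1.0 / gamma)

def render_blurry_ldr(render_hdr, t_open, t_close, n_samples=8):
    """Toy physical imaging model: average HDR radiance sampled at
    poses along the trajectory during the exposure interval
    [t_open, t_close] (motion blur), scale by exposure time,
    then apply the CRF to produce the observed LDR frame."""
    ts = np.linspace(t_open, t_close, n_samples)
    exposure = t_close - t_open
    # Approximate the exposure integral by averaging samples
    hdr = np.mean([render_hdr(t) for t in ts], axis=0) * exposure
    return crf(hdr)

# Toy radiance: a 2x2 "scene" whose brightness drifts with camera pose t
render = lambda t: np.full((2, 2), 0.5 + 0.1 * t)
img = render_blurry_ldr(render, 0.0, 1.0)
```

In the actual method, `render_hdr` would be a 3DGS rasterization at the interpolated camera pose, and `t_open`/`t_close`, the trajectory, and the CRF would all be optimized jointly against the observed frames.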

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition