Dynamic Gaussian Splatting from Defocused and Motion-blurred Monocular Videos
By: Xuankai Zhang, Junjin Xiao, Qing Zhang
Potential Business Impact:
Makes blurry videos look sharp from new viewpoints.
This paper presents a unified framework for high-quality dynamic Gaussian Splatting from both defocused and motion-blurred monocular videos. Because defocus blur and motion blur arise from significantly different formation processes, existing methods are tailored to one or the other and cannot handle both simultaneously. Although the two can be jointly modeled as blur kernel-based convolution, the inherent difficulty of estimating accurate blur kernels has greatly limited progress in this direction. In this work, we take a step further in this direction. Specifically, we propose to estimate reliable per-pixel blur kernels using a blur prediction network that exploits blur-related scene and camera information and is subject to a blur-aware sparsity constraint. In addition, we introduce a dynamic Gaussian densification strategy to mitigate the lack of Gaussians in incomplete regions, and we boost novel view synthesis by incorporating unseen-view information to constrain scene optimization. Extensive experiments show that our method outperforms state-of-the-art methods in generating photorealistic novel views from defocused and motion-blurred monocular videos. Our code and trained model will be made publicly available.
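The blur kernel-based convolution the abstract refers to can be made concrete. Below is a minimal PyTorch sketch, not the authors' implementation: a rendered sharp frame is blurred with a per-pixel kernel of the kind a blur prediction network would output, and an entropy penalty stands in for one plausible form of a blur-aware sparsity constraint. The function names, the 5x5 kernel size, and the entropy formulation are illustrative assumptions.

import torch
import torch.nn.functional as F

def apply_per_pixel_blur(sharp: torch.Tensor, kernels: torch.Tensor, k: int = 5) -> torch.Tensor:
    # sharp:   (C, H, W) sharp image rendered from the dynamic Gaussians.
    # kernels: (H*W, k*k) per-pixel kernels; each row is nonnegative and sums
    #          to 1 (e.g. the softmax output of a blur prediction network).
    C, H, W = sharp.shape
    # Extract every pixel's k x k neighborhood: (C*k*k, H*W).
    patches = F.unfold(sharp.unsqueeze(0), kernel_size=k, padding=k // 2)[0]
    patches = patches.view(C, k * k, H * W)           # (C, k*k, H*W)
    weights = kernels.t().unsqueeze(0)                # (1, k*k, H*W)
    blurry = (patches * weights).sum(dim=1)           # weighted sum per pixel
    return blurry.view(C, H, W)

def kernel_sparsity_loss(kernels: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Illustrative sparsity term: low kernel entropy pushes each kernel
    # toward a delta (no blur), so sharp pixels are not needlessly smoothed.
    entropy = -(kernels * (kernels + eps).log()).sum(dim=1)
    return entropy.mean()

In such a setup, the simulated blurry frame apply_per_pixel_blur(rendered, kernels) would be compared against the observed blurry input, so the underlying Gaussians are optimized toward a sharp scene while the kernels absorb the defocus or motion blur.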
Similar Papers
Uncertainty Matters in Dynamic Gaussian Splatting for Monocular 4D Reconstruction
CV and Pattern Recognition
Makes 3D videos more real, even with missing parts.
VDEGaussian: Video Diffusion Enhanced 4D Gaussian Splatting for Dynamic Urban Scenes Modeling
CV and Pattern Recognition
Makes videos of moving things look clearer.
SplitGaussian: Reconstructing Dynamic Scenes via Visual Geometry Decomposition
CV and Pattern Recognition
Makes videos look smoother and more real.