Uncertainty Matters in Dynamic Gaussian Splatting for Monocular 4D Reconstruction
By: Fengzhi Guo, Chih-Chuan Hsu, Sihao Ding, and more
Potential Business Impact:
Makes 3D videos more real, even with missing parts.
Reconstructing dynamic 3D scenes from monocular input is fundamentally under-constrained, with ambiguities arising from occlusion and extreme novel views. While dynamic Gaussian Splatting offers an efficient representation, vanilla models optimize all Gaussian primitives uniformly, ignoring whether they are well or poorly observed. This limitation leads to motion drift under occlusion and degraded synthesis when extrapolating to unseen views. We argue that uncertainty matters: Gaussians with recurring observations across views and time act as reliable anchors to guide motion, whereas those with limited visibility are treated as less reliable. To this end, we introduce USplat4D, a novel uncertainty-aware dynamic Gaussian Splatting framework that propagates reliable motion cues to enhance 4D reconstruction. Our key insight is to estimate time-varying per-Gaussian uncertainty and leverage it to construct a spatio-temporal graph for uncertainty-aware optimization. Experiments on diverse real and synthetic datasets show that explicitly modeling uncertainty consistently improves dynamic Gaussian Splatting models, yielding more stable geometry under occlusion and high-quality synthesis at extreme viewpoints.
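The core idea of weighting optimization by per-Gaussian uncertainty can be sketched as follows. This is a minimal illustration, not the paper's implementation: the observation-count heuristic, the function names, and the neighbor-consistency regularizer are all assumptions made for the example; the actual method estimates time-varying uncertainty and builds a full spatio-temporal graph.

```python
import numpy as np

def per_gaussian_uncertainty(obs_counts, eps=1e-6):
    # Hypothetical heuristic: Gaussians observed more often across views
    # and time get lower uncertainty (higher confidence).
    confidence = obs_counts / (obs_counts.max() + eps)
    return 1.0 - confidence  # uncertainty in [0, 1]

def uncertainty_weighted_motion_loss(motions, neighbors, uncertainty):
    # Graph regularizer sketch: each Gaussian's motion is pulled toward
    # its neighbors' motions, weighted by the neighbors' confidence, so
    # well-observed Gaussians act as anchors for poorly observed ones.
    loss = 0.0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            anchor_weight = 1.0 - uncertainty[j]
            loss += anchor_weight * np.sum((motions[i] - motions[j]) ** 2)
    return loss
```

In this toy formulation, a rarely seen Gaussian incurs little penalty for disagreeing with its neighbors when those neighbors are themselves uncertain, but is strongly regularized toward confident anchors, which is the qualitative behavior the abstract describes.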
Similar Papers
Geometry-Consistent 4D Gaussian Splatting for Sparse-Input Dynamic View Synthesis
CV and Pattern Recognition
Creates realistic 3D scenes from few pictures.
Dynamic Gaussian Splatting from Defocused and Motion-blurred Monocular Videos
CV and Pattern Recognition
Makes blurry videos look clear for new views.
Splat4D: Diffusion-Enhanced 4D Gaussian Splatting for Temporally and Spatially Consistent Content Creation
CV and Pattern Recognition
Makes 3D videos look real and move smoothly.