UPGS: Unified Pose-aware Gaussian Splatting for Dynamic Scene Deblurring
By: Zhijing Wu, Longguang Wang
Potential Business Impact:
Fixes blurry videos for better 3D worlds.
Reconstructing dynamic 3D scenes from monocular video has broad applications in AR/VR, robotics, and autonomous navigation, but often fails under severe motion blur caused by camera and object motion. Existing methods commonly follow a two-step pipeline in which camera poses are first estimated and 3D Gaussians are then optimized. Since blurring artifacts usually undermine pose estimation, pose errors accumulate and degrade the final reconstruction. To address this issue, we introduce a unified optimization framework that treats camera poses as learnable parameters complementary to 3DGS attributes for end-to-end optimization. Specifically, we recast camera and object motion as per-primitive SE(3) affine transformations on 3D Gaussians and formulate a unified optimization objective. For stable optimization, we introduce a three-stage training schedule that optimizes camera poses and Gaussians alternately: the 3D Gaussians are first trained with poses fixed, the poses are then optimized with the 3D Gaussians frozen, and finally all learnable parameters are optimized jointly. Extensive experiments on the Stereo Blur dataset and challenging real-world sequences demonstrate that our method achieves significant gains in reconstruction quality and pose estimation accuracy over prior dynamic deblurring methods.
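The abstract describes two mechanisms: per-primitive SE(3) transformations driven by learnable camera poses, and a three-stage schedule that alternates which parameters are trained. The PyTorch sketch below illustrates one plausible realization; the abstract gives no implementation details, so the class UnifiedPoseGaussians, the se(3)-increment parameters cam_xi/obj_xi, and the helpers se3_exp and set_stage are hypothetical names chosen for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

def se3_exp(xi):
    """Exponential map from a se(3) 6-vector (omega, v) to a 4x4 SE(3) matrix."""
    omega, v = xi[:3], xi[3:]
    theta = omega.norm()
    K = torch.zeros(3, 3)                      # skew-symmetric matrix of omega
    K[0, 1], K[0, 2], K[1, 2] = -omega[2], omega[1], -omega[0]
    K[1, 0], K[2, 0], K[2, 1] = omega[2], -omega[1], omega[0]
    I = torch.eye(3)
    if theta < 1e-8:                           # small-angle first-order approximation
        R, V = I + K, I + 0.5 * K
    else:                                      # Rodrigues' formula and its V matrix
        s, c = torch.sin(theta), torch.cos(theta)
        R = I + (s / theta) * K + ((1 - c) / theta**2) * (K @ K)
        V = I + ((1 - c) / theta**2) * K + ((theta - s) / theta**3) * (K @ K)
    T = torch.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

class UnifiedPoseGaussians(nn.Module):
    """Hypothetical container: Gaussian centers plus learnable pose/motion increments."""
    def __init__(self, num_gaussians, num_frames):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_gaussians, 3))    # 3DGS centers
        self.cam_xi = nn.Parameter(torch.zeros(num_frames, 6))      # camera pose per frame
        self.obj_xi = nn.Parameter(torch.zeros(num_frames, num_gaussians, 6))  # per-primitive motion

    def transformed_means(self, t):
        """Apply per-Gaussian object motion, then the frame-t camera pose (naive loop for clarity)."""
        cam_T = se3_exp(self.cam_xi[t])
        pts = []
        for i in range(self.means.shape[0]):
            obj_T = se3_exp(self.obj_xi[t, i])
            p = torch.cat([self.means[i], torch.ones(1)])           # homogeneous coordinates
            pts.append((cam_T @ obj_T @ p)[:3])
        return torch.stack(pts)

def set_stage(model, stage):
    """Three-stage schedule: 1 = Gaussians only, 2 = poses only, 3 = joint optimization."""
    train_gaussians = stage in (1, 3)
    train_poses = stage in (2, 3)
    model.means.requires_grad_(train_gaussians)
    model.cam_xi.requires_grad_(train_poses)
    model.obj_xi.requires_grad_(train_poses)
```

Under this sketch, a training loop would call set_stage(model, 1), then set_stage(model, 2), then set_stage(model, 3) at the stage boundaries, so each phase updates only the parameters the paper's schedule unfreezes while the rendering loss stays the same throughout.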
Similar Papers
Unposed 3DGS Reconstruction with Probabilistic Procrustes Mapping
CV and Pattern Recognition
Creates detailed 3D worlds from many photos.
3R-GS: Best Practice in Optimizing Camera Poses Along with 3DGS
CV and Pattern Recognition
Makes 3D pictures look real, even with bad camera data.
ProDyG: Progressive Dynamic Scene Reconstruction via Gaussian Splatting from Monocular Videos
CV and Pattern Recognition
Builds 3D worlds from videos in real-time.