3D Gaussian Representations with Motion Trajectory Field for Dynamic Scene Reconstruction
By: Xuesong Li, Lars Petersson, Vivien Rolland
Potential Business Impact:
Lets a regular video of moving objects be re-rendered from new camera angles.
This paper addresses the challenge of novel-view synthesis and motion reconstruction of dynamic scenes from monocular video, which is critical for many robotic applications. Although Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have demonstrated remarkable success in rendering static scenes, extending them to dynamic scenes remains challenging. In this work, we introduce a novel approach that combines 3DGS with a motion trajectory field, enabling precise handling of complex object motions and producing physically plausible motion trajectories. By decoupling dynamic objects from the static background, our method compactly optimizes the motion trajectory field. The approach incorporates time-invariant motion coefficients and shared motion trajectory bases to capture intricate motion patterns while minimizing optimization complexity. Extensive experiments demonstrate that our approach achieves state-of-the-art results in both novel-view synthesis and motion trajectory recovery from monocular video, advancing the capabilities of dynamic scene reconstruction.
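To make the shared-basis idea concrete, here is a minimal sketch (not the authors' code) of how a motion trajectory field of this kind can be parameterized: each dynamic Gaussian's displacement over time is a linear combination of a small set of trajectory bases shared across all Gaussians, weighted by time-invariant per-Gaussian coefficients. The tensor shapes, names, and the choice of discretized learnable bases are illustrative assumptions; the paper's exact basis parameterization is not specified here.

```python
# Hedged sketch of a shared-basis motion trajectory field (assumed design).
import torch

N, K, T = 1000, 16, 60   # dynamic Gaussians, shared bases, time steps (assumed sizes)

# Shared trajectory bases: K basis curves, each a 3D offset per time step.
bases = torch.nn.Parameter(torch.randn(K, T, 3) * 0.01)

# Time-invariant coefficients: one K-vector per dynamic Gaussian.
coeffs = torch.nn.Parameter(torch.zeros(N, K))

# Canonical centers of the dynamic Gaussians (static background Gaussians
# would be kept separate and never displaced).
mu_canonical = torch.randn(N, 3)

def positions_at(t: int) -> torch.Tensor:
    """Gaussian centers at frame t: canonical center plus basis-weighted offset."""
    offset = coeffs @ bases[:, t, :]   # (N, K) @ (K, 3) -> (N, 3)
    return mu_canonical + offset

# Example: query centers at frame 10; in a full pipeline these would feed the
# 3DGS rasterizer, and the photometric loss would backpropagate into both
# `coeffs` and `bases`.
mu_t = positions_at(10)
print(mu_t.shape)  # torch.Size([1000, 3])
```

Because the bases are shared and the per-Gaussian coefficients do not vary with time, the number of optimized motion parameters grows as N*K + K*T*3 rather than N*T*3, which is one plausible reading of how the method keeps optimization compact.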
Similar Papers
4D3R: Motion-Aware Neural Reconstruction and Rendering of Dynamic Scenes from Monocular Videos
CV and Pattern Recognition
Creates realistic 3D videos from regular videos.
Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field
CV and Pattern Recognition
Reviews techniques that make dynamic scenes look real by modeling movement.
Dy3DGS-SLAM: Monocular 3D Gaussian Splatting SLAM for Dynamic Environments
CV and Pattern Recognition
Maps moving things using only one camera.