Online 3D Gaussian Splatting Modeling with Novel View Selection
By: Byeonggwon Lee, Junkyu Park, Khang Truong Giang, and more
Potential Business Impact:
Builds more complete 3D models from ordinary video by choosing the most useful frames.
This study addresses the challenge of generating online 3D Gaussian Splatting (3DGS) models from RGB-only frames. Previous studies have employed dense SLAM techniques to estimate 3D scenes from keyframes for 3DGS model construction. However, these methods rely solely on keyframes, which are insufficient to capture an entire scene, resulting in incomplete reconstructions. Building a generalizable model also requires frames from diverse viewpoints for broader scene coverage, yet online processing limits both the number of usable frames and the number of training iterations. We therefore propose a novel method for high-quality 3DGS modeling that improves model completeness through adaptive view selection. By analyzing reconstruction quality online, our approach selects optimal non-keyframes for additional training; integrating these selected non-keyframes with the keyframes refines incomplete regions from diverse viewpoints, significantly enhancing completeness. We also present a framework that incorporates an online multi-view stereo approach, ensuring consistent 3D information throughout the 3DGS modeling process. Experimental results demonstrate that our method outperforms state-of-the-art methods, delivering exceptional performance in complex outdoor scenes.
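To make the adaptive view-selection idea concrete, here is a minimal sketch of how an online pipeline could score candidate non-keyframes and keep the worst-covered ones for extra training. This is an illustrative assumption, not the paper's actual algorithm: the `render_fn` callback, the frame dictionaries, the photometric L1 criterion, and the top-k cutoff are all hypothetical placeholders standing in for whatever quality analysis the authors use.

```python
import torch

def select_novel_views(gaussian_model, render_fn, non_keyframes, k=5):
    """Score candidate non-keyframes by how poorly the current 3DGS model
    reconstructs them, then return the k worst-covered views for extra training.

    Assumptions (not from the paper):
      - render_fn(model, pose) returns a rendered RGB tensor of shape (H, W, 3)
      - each frame is a dict with a camera "pose" and an observed "image" tensor
    """
    scores = []
    for frame in non_keyframes:
        with torch.no_grad():
            rendered = render_fn(gaussian_model, frame["pose"])
            # Photometric L1 error as a proxy for reconstruction quality;
            # a high error suggests the view sees under-reconstructed regions.
            error = torch.mean(torch.abs(rendered - frame["image"])).item()
        scores.append((error, frame))
    # Keep the views where the current model performs worst.
    scores.sort(key=lambda pair: pair[0], reverse=True)
    return [frame for _, frame in scores[:k]]
```

In an online setting, a selection step like this would run periodically alongside the keyframe-based optimization, so that the limited extra training iterations are spent only on views that expose incomplete geometry rather than on every incoming frame.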
Similar Papers
Enhancing Novel View Synthesis from extremely sparse views with SfM-free 3D Gaussian Splatting Framework
CV and Pattern Recognition
Makes 3D pictures from few photos.
Cross-Temporal 3D Gaussian Splatting for Sparse-View Guided Scene Update
CV and Pattern Recognition
Builds 3D worlds from old and new pictures.