FastGS: Training 3D Gaussian Splatting in 100 Seconds
By: Shiwei Ren, Tianci Wen, Yongchun Fang, and more
Potential Business Impact:
Makes 3D pictures build much faster.
The dominant 3D Gaussian splatting (3DGS) acceleration methods fail to properly regulate the number of Gaussians during training, causing redundant computational overhead. In this paper, we propose FastGS, a novel, simple, and general acceleration framework that fully considers the importance of each Gaussian based on multi-view consistency, efficiently balancing the trade-off between training time and rendering quality. We design a densification and pruning strategy based on multi-view consistency, dispensing with the budgeting mechanism used by prior methods. Extensive experiments on the Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets demonstrate that our method significantly outperforms state-of-the-art methods in training speed, achieving a 3.32$\times$ training acceleration with comparable rendering quality relative to DashGaussian on the Mip-NeRF 360 dataset, and a 15.45$\times$ acceleration over vanilla 3DGS on the Deep Blending dataset. FastGS also exhibits strong generality, delivering 2-7$\times$ training acceleration across tasks including dynamic scene reconstruction, surface reconstruction, sparse-view reconstruction, large-scale reconstruction, and simultaneous localization and mapping. The project page is available at https://fastgs.github.io/
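To make the core idea concrete, here is a minimal sketch (not the authors' actual implementation) of how an importance score based on multi-view consistency could drive pruning: each Gaussian is scored by how many training views it contributes to above a threshold, and Gaussians that matter in too few views are pruned. The names `multiview_importance`, `prune_mask`, the threshold `tau`, and the `min_views` cutoff are illustrative assumptions, not from the paper.

```python
import numpy as np

def multiview_importance(contrib, tau=0.01):
    """contrib: (num_views, num_gaussians) array of each Gaussian's
    per-view blending contribution (illustrative stand-in for the
    paper's importance measure). The score counts the views in which
    the Gaussian contributes above tau, i.e. a multi-view
    consistency score."""
    return (contrib > tau).sum(axis=0)

def prune_mask(contrib, tau=0.01, min_views=2):
    """Keep only Gaussians that are important in at least min_views
    views; no global budget on the Gaussian count is imposed."""
    return multiview_importance(contrib, tau) >= min_views

# Toy demo: 8 views, 1000 Gaussians, sparse random contributions.
rng = np.random.default_rng(0)
contrib = rng.random((8, 1000)) * (rng.random((8, 1000)) < 0.3)
keep = prune_mask(contrib)
print(keep.sum(), "of", keep.size, "Gaussians kept")
```

In a real 3DGS pipeline the contributions would come from the rasterizer's alpha-blending weights accumulated during training, and the surviving mask would index into the optimizer's Gaussian parameters.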
Similar Papers
FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting
CV and Pattern Recognition
Makes 3D pictures work on less powerful computers.
Scale-GS: Efficient Scalable Gaussian Splatting via Redundancy-filtering Training on Streaming Content
CV and Pattern Recognition
Makes videos of moving things look real, faster.
BalanceGS: Algorithm-System Co-design for Efficient 3D Gaussian Splatting Training on GPU
CV and Pattern Recognition
Makes 3D pictures build much faster.