SVR-GS: Spatially Variant Regularization for Probabilistic Masks in 3D Gaussian Splatting
By: Ashkan Taghipour, Vahid Naghshin, Benjamin Southwell, and more
Potential Business Impact:
Makes 3D scene models smaller and faster to render.
3D Gaussian Splatting (3DGS) enables fast, high-quality novel view synthesis but typically relies on densification followed by pruning to optimize the number of Gaussians. Existing mask-based pruning, such as MaskGS, regularizes the global mean of the mask, which is misaligned with the local per-pixel (per-ray) reconstruction loss that determines image quality along individual camera rays. This paper introduces SVR-GS, a spatially variant regularizer that renders a per-pixel spatial mask from each Gaussian's effective contribution along the ray, thereby applying sparsity pressure where it matters: on low-importance Gaussians. We explore three spatial-mask aggregation strategies, implement them in CUDA, and conduct a gradient analysis to motivate our final design. Extensive experiments on the Tanks & Temples, Deep Blending, and Mip-NeRF360 datasets demonstrate that, on average across the three datasets, the proposed SVR-GS reduces the number of Gaussians by 1.79× compared to MaskGS and 5.63× compared to 3DGS, while incurring only 0.50 dB and 0.40 dB PSNR drops, respectively. These gains translate into significantly smaller, faster, and more memory-efficient models, making them well-suited for real-time applications such as robotics, AR/VR, and mobile perception.
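The core idea above can be sketched in a few lines: instead of penalizing the global mean of the per-Gaussian mask (MaskGS-style), alpha-composite the mask values along each ray and penalize the rendered per-pixel mask, so sparsity pressure scales with each Gaussian's actual contribution. The sketch below is a minimal NumPy illustration under assumed simplifications (a precomputed per-ray alpha matrix, mean aggregation); the paper's actual CUDA implementation and its three aggregation strategies are not reproduced here, and all function names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def render_spatial_mask(mask_logits, alphas):
    """Alpha-composite per-Gaussian soft masks front-to-back along each ray.

    mask_logits: (G,)  learnable logit per Gaussian
    alphas:      (R, G) opacity contribution of each Gaussian on each ray,
                 assumed already sorted front-to-back (illustrative input)
    returns:     (R,)  rendered spatial mask per pixel/ray
    """
    m = sigmoid(mask_logits)                         # soft masks in (0, 1)
    trans = np.cumprod(1.0 - alphas, axis=1)         # transmittance after each Gaussian
    trans = np.concatenate(                          # shift: transmittance *before* each
        [np.ones((alphas.shape[0], 1)), trans[:, :-1]], axis=1)
    weights = alphas * trans                         # standard compositing weights
    return weights @ m                               # per-pixel mask value

def global_mean_reg(mask_logits):
    # MaskGS-style regularizer: global mean of the mask,
    # blind to where each Gaussian actually contributes
    return sigmoid(mask_logits).mean()

def spatially_variant_reg(mask_logits, alphas):
    # SVR-GS-style regularizer (sketch): penalize the rendered per-pixel
    # mask, so pressure concentrates on low-contribution Gaussians
    return render_spatial_mask(mask_logits, alphas).mean()
```

A Gaussian that contributes nothing along any ray (all-zero alphas) leaves the spatially variant penalty unchanged no matter its mask logit, whereas it still inflates the global-mean penalty; this is the misalignment the abstract describes.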
Similar Papers
MVGSR: Multi-View Consistency Gaussian Splatting for Robust Surface Reconstruction
CV and Pattern Recognition
Makes 3D models from moving pictures accurately.
DET-GS: Depth- and Edge-Aware Regularization for High-Fidelity 3D Gaussian Splatting
CV and Pattern Recognition
Makes 3D pictures look real from few photos.
Evolving High-Quality Rendering and Reconstruction in a Unified Framework with Contribution-Adaptive Regularization
CV and Pattern Recognition
Creates realistic 3D worlds from pictures faster.