Quantile Rendering: Efficiently Embedding High-dimensional Features on 3D Gaussian Splatting
By: Yoonwoo Jeong, Cheng Sun, Frank Wang, and more
Recent advancements in computer vision have extended open-vocabulary segmentation (OVS) to the 3D domain by leveraging 3D Gaussian Splatting (3D-GS). Despite this progress, efficiently rendering the high-dimensional features required for open-vocabulary queries remains a significant challenge. Existing methods rely on codebooks or feature compression, which causes information loss and degrades segmentation quality. To address this limitation, we introduce Quantile Rendering (Q-Render), a novel rendering strategy for 3D Gaussians that efficiently handles high-dimensional features while maintaining high fidelity. Unlike conventional volume rendering, which densely samples all 3D Gaussians intersecting each ray, Q-Render sparsely samples only those with dominant influence along the ray. By integrating Q-Render into a generalizable 3D neural network, we further propose the Gaussian Splatting Network (GS-Net), which predicts Gaussian features in a generalizable manner. Extensive experiments on ScanNet and LeRF demonstrate that our framework outperforms state-of-the-art methods while enabling real-time rendering, with an approximately 43.7x speedup on 512-D feature maps. Code will be made publicly available.
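The abstract contrasts dense alpha compositing, which blends every Gaussian's D-dimensional feature along a ray, with sparse sampling of only the dominant Gaussians. The NumPy sketch below illustrates that contrast under stated assumptions: the function names, the quantile-threshold selection heuristic, and the (0.25, 0.5, 0.75) levels are illustrative guesses suggested by the method's name, not the paper's actual algorithm.

```python
import numpy as np

def dense_feature_render(alphas, feats):
    """Conventional alpha compositing: blend every Gaussian's feature
    along the ray, costing O(N * D) per ray for N Gaussians, D dims."""
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans                 # w_i = a_i * prod_{j<i}(1 - a_j)
    return weights @ feats                   # (D,) blended feature

def quantile_feature_render(alphas, feats, quantiles=(0.25, 0.5, 0.75)):
    """Hypothetical sparse variant: pick only the Gaussians whose
    accumulated opacity first crosses each quantile level, then blend
    those few, costing O(K * D) per ray for K << N selected Gaussians."""
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans
    accum = np.cumsum(weights)               # accumulated opacity along the ray
    picked = sorted({int(np.searchsorted(accum, q))
                     for q in quantiles if q <= accum[-1]})
    if not picked:                           # ray accumulates almost no opacity
        return np.zeros(feats.shape[1])
    w = weights[picked]
    return (w / w.sum()) @ feats[picked]     # renormalized sparse blend

# Toy usage: 8 Gaussians along one ray, each carrying a 512-D feature.
alphas = np.random.uniform(0.05, 0.6, size=8)
feats = np.random.randn(8, 512)
dense = dense_feature_render(alphas, feats)
sparse = quantile_feature_render(alphas, feats)
```

The point of the sketch is the asymptotic difference: the dense path touches all N features per ray, while the sparse path touches only the handful of Gaussians that dominate the ray's accumulated opacity.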