Robust Mesh Saliency GT Acquisition in VR via View Cone Sampling and Geometric Smoothing
By: Guoquan Zheng, Jie Hao, Huiyu Duan, and others
Potential Business Impact:
Helps VR systems more accurately measure where people look.
Reliable 3D mesh saliency ground truth (GT) is essential for human-centric visual modeling in virtual reality (VR). However, current 3D mesh saliency GT acquisition methods largely mirror 2D image pipelines, ignoring the differences between 3D geometric topology and 2D image arrays. Existing VR eye-tracking pipelines rely on single-ray sampling and Euclidean smoothing, which trigger spurious texture attention and leak saliency signals across surface gaps. This paper proposes a robust framework to address these limitations. We first introduce a view cone sampling (VCS) strategy, which simulates the human foveal receptive field via Gaussian-distributed ray bundles to improve sampling robustness on complex topologies. Furthermore, a hybrid Manifold-Euclidean constrained diffusion (HCD) algorithm is developed, fusing manifold geodesic constraints with Euclidean scales to ensure topologically consistent saliency propagation. By mitigating "topological short-circuits" and aliasing, our framework provides a high-fidelity 3D attention acquisition paradigm that aligns with natural human perception, offering a more accurate and robust baseline for 3D mesh saliency research.
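The view cone sampling idea described above can be sketched in a few lines: instead of casting a single gaze ray, cast a bundle of rays whose angular offsets from the gaze direction follow a Gaussian, approximating the foveal receptive field. The abstract does not give the paper's parameterization, so the function name, the angular standard deviation `sigma_deg`, and the ray count below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def view_cone_rays(gaze_dir, n_rays=32, sigma_deg=1.0, rng=None):
    """Sample a Gaussian-distributed bundle of rays around a gaze direction.

    Illustrative sketch of view cone sampling: each ray is the gaze
    direction perturbed by small tangent-plane angles drawn from
    N(0, sigma_deg^2). Parameters are assumptions, not from the paper.
    """
    rng = np.random.default_rng(rng)
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)
    # Build an orthonormal basis (u, v) in the plane perpendicular to the gaze.
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    # Gaussian angular offsets (in radians) along the two tangent directions.
    sigma = np.deg2rad(sigma_deg)
    offsets = rng.normal(0.0, sigma, size=(n_rays, 2))
    rays = d + offsets[:, :1] * u + offsets[:, 1:] * v
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    return rays
```

Each returned ray would then be intersected with the mesh, and the hits aggregated, so that a fixation near a thin structure or a hole still lands on plausible geometry rather than depending on a single brittle ray.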
Similar Papers
IntelliCap: Intelligent Guidance for Consistent View Sampling
CV and Pattern Recognition
Guides cameras to take perfect pictures for 3D scenes.
360-GeoGS: Geometrically Consistent Feed-Forward 3D Gaussian Splatting Reconstruction for 360 Images
CV and Pattern Recognition
Creates accurate 3D worlds from pictures fast.
Tessellation GS: Neural Mesh Gaussians for Robust Monocular Reconstruction of Dynamic Objects
CV and Pattern Recognition
Makes 3D scenes look real from one camera.