Statistical Confidence Rescoring for Robust 3D Scene Graph Generation from Multi-View Images
By: Qi Xun Yeo, Yanyan Li, Gim Hee Lee
Potential Business Impact:
Helps computers understand 3D scenes from pictures.
Modern 3D semantic scene graph estimation methods rely on ground truth 3D annotations to accurately predict target objects, predicates, and relationships. In the absence of 3D ground truth representations, we explore leveraging only multi-view RGB images to tackle this task. To attain robust features for accurate scene graph estimation, we must overcome the noisy pseudo point-based geometry reconstructed from predicted depth maps and reduce the amount of background noise present in multi-view image features. The key is to enrich node and edge features with accurate semantic and spatial information, as well as with information from neighboring relations. We obtain semantic masks to guide feature aggregation and filter out background features, and we design a novel method to incorporate neighboring node information to improve the robustness of our scene graph estimates. Furthermore, we leverage explicit statistical priors, computed from training-set summary statistics, to refine node and edge predictions based on their one-hop neighborhood. Our experiments show that our method outperforms current methods that use only multi-view images as the initial input. Our project page is available at https://qixun1.github.io/projects/SCRSSG.
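The statistical rescoring idea described above can be illustrated with a minimal sketch: class co-occurrence priors are estimated from training statistics and then used to refine a node's predicted class distribution given the labels of its one-hop neighbors. All function names, shapes, and the blending scheme below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def cooccurrence_prior(train_pairs, num_classes, smoothing=1.0):
    """Estimate P(node class | neighbor class) from training label pairs.

    train_pairs: iterable of (node_class, neighbor_class) index pairs
    observed in the training set (a hypothetical summary statistic).
    """
    counts = np.full((num_classes, num_classes), smoothing)
    for ci, cj in train_pairs:
        counts[ci, cj] += 1.0
    # Normalize each column so counts[:, j] is a distribution over node
    # classes conditioned on a neighbor of class j.
    return counts / counts.sum(axis=0, keepdims=True)

def rescore(node_probs, neighbor_labels, prior, alpha=0.5):
    """Blend a node's softmax scores with the mean prior over its one-hop
    neighbors, then renormalize. alpha controls the prior's influence."""
    if len(neighbor_labels) == 0:
        return node_probs
    prior_term = prior[:, neighbor_labels].mean(axis=1)
    refined = (1 - alpha) * node_probs + alpha * prior_term * node_probs
    return refined / refined.sum()

# Toy example with 3 classes: class 0 frequently co-occurs with class-1
# neighbors in the (made-up) training pairs, so a class-1 neighbor should
# raise the node's confidence in class 0.
prior = cooccurrence_prior([(0, 1), (0, 1), (2, 1)], num_classes=3)
probs = np.array([0.40, 0.35, 0.25])
refined = rescore(probs, neighbor_labels=[1], prior=prior)
```

In this toy run, the refined distribution shifts probability mass toward class 0 relative to the raw scores, which is the intended effect of conditioning on the one-hop neighborhood.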
Similar Papers
Integrating Prior Observations for Incremental 3D Scene Graph Prediction
CV and Pattern Recognition
Helps robots understand messy places better.
Robust Scene Coordinate Regression via Geometrically-Consistent Global Descriptors
CV and Pattern Recognition
Helps robots find their way better in new places.
Scene Coordinate Reconstruction Priors
CV and Pattern Recognition
Makes 3D pictures more real and accurate.