Finding 3D Scene Analogies with Multimodal Foundation Models
By: Junho Kim, Young Min Kim
Potential Business Impact:
Robots learn new places by comparing them to old ones.
Connecting current observations with prior experiences helps robots adapt and plan in new, unseen 3D environments. Recently, 3D scene analogies, smooth maps that align scene regions sharing common spatial relationships, have been proposed to connect two 3D scenes. These maps enable detailed transfer of trajectories or waypoints, potentially supporting demonstration transfer for imitation learning or task plan transfer across scenes. However, existing methods for this task require additional training and fixed object vocabularies. In this work, we propose using multimodal foundation models to find 3D scene analogies in a zero-shot, open-vocabulary setting. Central to our approach is a hybrid neural scene representation consisting of a sparse graph built from vision-language model features and a feature field derived from 3D shape foundation models. 3D scene analogies are then found in a coarse-to-fine manner: the graphs are aligned first, and the correspondence is refined with the feature fields. Our method establishes accurate correspondences between complex scenes, and we showcase applications in trajectory and waypoint transfer.
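To make the coarse-to-fine idea concrete, here is a minimal sketch, not the authors' implementation: it assumes object nodes already carry vision-language features and 3D centers (hypothetical inputs `feats_a`, `feats_b`, `centers_a`, `centers_b`), performs a coarse graph match by cosine similarity, then warps query points with a simple per-object displacement as a stand-in for the feature-field refinement described in the abstract.

```python
import numpy as np

def match_scene_graphs(feats_a, feats_b):
    """Coarse step: match object nodes across scenes by cosine similarity
    of their vision-language features (feats_a: [Na, D], feats_b: [Nb, D])."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                       # [Na, Nb] cosine similarities
    return sim.argmax(axis=1)           # greedy nearest-neighbor assignment

def refine_point_map(points_a, centers_a, centers_b, matches):
    """Fine step (simplified): move each query point in scene A by the
    displacement of its nearest matched object center. The paper's
    feature-field refinement would replace this purely geometric offset."""
    nearest = np.linalg.norm(
        points_a[:, None, :] - centers_a[None, :, :], axis=2).argmin(axis=1)
    offsets = centers_b[matches] - centers_a    # per-object displacement
    return points_a + offsets[nearest]

# Toy usage with random stand-in features, object centers, and waypoints.
rng = np.random.default_rng(0)
feats_a, feats_b = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))
centers_a, centers_b = rng.normal(size=(4, 3)), rng.normal(size=(5, 3))
matches = match_scene_graphs(feats_a, feats_b)
waypoints_b = refine_point_map(rng.normal(size=(6, 3)), centers_a, centers_b, matches)
print(matches, waypoints_b.shape)
```

In the actual method, both steps draw on foundation-model features rather than raw geometry; the sketch only illustrates where the coarse graph alignment and the fine correspondence refinement sit in the pipeline.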
Similar Papers
HMR3D: Hierarchical Multimodal Representation for 3D Scene Understanding with Large Vision-Language Model
CV and Pattern Recognition
Helps computers understand 3D spaces from pictures and words.
Analogical Learning for Cross-Scenario Generalization: Framework and Application to Intelligent Localization
Machine Learning (CS)
Teaches computers to learn from new situations.
What Is The Best 3D Scene Representation for Robotics? From Geometric to Foundation Models
Robotics
Helps robots understand and move in the real world.