Intelligent Spatial Perception by Building Hierarchical 3D Scene Graphs for Indoor Scenarios with the Help of LLMs
By: Yao Cheng, Zhe Han, Fengyang Jiang, and more
Potential Business Impact:
Helps robots understand buildings to move around better.
This paper addresses the demand in advanced robot navigation for a more holistic understanding of spatial environments by introducing a novel system that harnesses Large Language Models (LLMs) to construct hierarchical 3D Scene Graphs (3DSGs) for indoor scenarios. The proposed framework builds 3DSGs consisting of a fundamental layer with rich metric-semantic information; an object layer featuring precise point-cloud representations of object nodes along with visual descriptors; and higher layers of room, floor, and building nodes. Through the application of LLMs, not only object nodes but also higher-layer nodes, e.g., room nodes, are annotated intelligently and accurately. A polling mechanism for room classification using LLMs is proposed to improve the accuracy and reliability of room-node annotation. Thorough numerical experiments demonstrate the system's ability to integrate semantic descriptions with geometric data, creating an accurate and comprehensive representation of the environment that is instrumental for context-aware navigation and task planning.
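The abstract describes two concrete ideas: a layered scene-graph hierarchy (objects, rooms, floors, building) and an LLM polling mechanism that classifies a room from the objects it contains by majority vote. The sketch below illustrates how such a structure and vote could be organized in Python; the node fields, the `classify_room_by_polling` helper, and the stub LLM callable are illustrative assumptions, not the paper's actual implementation or API.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# Hypothetical node for a hierarchical 3D scene graph (3DSG): each node keeps
# a layer tag (object / room / floor / building), a semantic label, and links
# to its child nodes.
@dataclass
class SceneGraphNode:
    node_id: str
    layer: str                      # e.g. "object", "room", "floor", "building"
    label: Optional[str] = None     # semantic annotation (possibly from an LLM)
    attributes: Dict[str, object] = field(default_factory=dict)
    children: List["SceneGraphNode"] = field(default_factory=list)

    def add_child(self, child: "SceneGraphNode") -> None:
        self.children.append(child)


def classify_room_by_polling(
    object_labels: List[str],
    query_llm: Callable[[str], str],
    num_votes: int = 5,
) -> str:
    """Ask the LLM several times which room type the listed objects suggest,
    then return the majority answer. `query_llm` is any callable mapping a
    prompt string to a single room-type string (an API wrapper in practice)."""
    prompt = (
        "The room contains the following objects: "
        + ", ".join(object_labels)
        + ". Answer with one room type (e.g. kitchen, bedroom, office)."
    )
    votes = [query_llm(prompt).strip().lower() for _ in range(num_votes)]
    most_common, _ = Counter(votes).most_common(1)[0]
    return most_common


if __name__ == "__main__":
    # Stub LLM for demonstration; a real system would call an actual model here.
    def fake_llm(prompt: str) -> str:
        return "kitchen"

    # Build a tiny hierarchy: building -> floor -> room -> objects.
    building = SceneGraphNode("b0", "building", "office building")
    floor = SceneGraphNode("f0", "floor", "floor 1")
    room = SceneGraphNode("r0", "room")
    objects = [SceneGraphNode(f"o{i}", "object", lbl)
               for i, lbl in enumerate(["stove", "sink", "refrigerator"])]

    building.add_child(floor)
    floor.add_child(room)
    for obj in objects:
        room.add_child(obj)

    # Annotate the room node via the polling mechanism.
    room.label = classify_room_by_polling([o.label for o in objects], fake_llm)
    print(room.label)  # -> "kitchen"
```

The polling step is a simple way to hedge against a single noisy LLM response: repeated queries over the same object list are aggregated by majority vote, which is consistent with the paper's stated goal of improving the accuracy and reliability of room-node annotation.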
Similar Papers
SpatialLM: Training Large Language Models for Structured Indoor Modeling
CV and Pattern Recognition
Lets computers understand 3D spaces like rooms.
Hierarchical Language Models for Semantic Navigation and Manipulation in an Aerial-Ground Robotic System
Robotics
Robots work together better using AI to move things.
How to Enable LLM with 3D Capacity? A Survey of Spatial Reasoning in LLM
CV and Pattern Recognition
Helps computers understand 3D worlds like we do.