Generating Actionable Robot Knowledge Bases by Combining 3D Scene Graphs with Robot Ontologies
By: Giang Nguyen, Mihai Pomarlan, Sascha Jongebloed, and more
Potential Business Impact:
Robots understand their surroundings to make smart choices.
In robotics, the effective integration of environmental data into actionable knowledge remains a significant challenge due to the variety and mutual incompatibility of the data formats commonly used in scene descriptions, such as MJCF, URDF, and SDF. This paper presents a novel approach that addresses these challenges by developing a unified scene graph model that standardizes these varied formats into the Universal Scene Description (USD) format. This standardization facilitates the integration of the resulting scene graphs with robot ontologies through semantic reporting, enabling the translation of complex environmental data into the actionable knowledge essential for cognitive robotic control. We evaluated our approach by converting procedural 3D environments into the USD format, which we then annotated semantically and translated into a knowledge graph that effectively answers competency questions, demonstrating its utility for real-time robotic decision-making. Additionally, we developed a web-based visualization tool that supports the semantic mapping process, providing users with an intuitive interface for managing the 3D environment.
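The pipeline the abstract describes (normalize a scene description, annotate it semantically, flatten it into a knowledge graph, then answer competency questions) can be sketched in miniature. This is an illustrative toy only, not the authors' actual API: the scene dictionary, predicate names, and helper functions below are all hypothetical stand-ins for the USD/ontology machinery in the paper.

```python
# Toy scene: objects with parent links and semantic types, standing in for a
# URDF/SDF/MJCF hierarchy after normalization into a unified scene graph.
scene = {
    "table": {"parent": None, "type": "Furniture"},
    "mug":   {"parent": "table", "type": "Container"},
    "spoon": {"parent": "table", "type": "Tool"},
    "floor": {"parent": None, "type": "Surface"},
}

def to_triples(scene):
    """Flatten the annotated scene graph into RDF-style triples."""
    triples = []
    for name, props in scene.items():
        # Semantic annotation: attach the object's ontological type.
        triples.append((name, "rdf:type", props["type"]))
        # Spatial relation derived from the scene-graph hierarchy.
        if props["parent"] is not None:
            triples.append((name, "isOnTopOf", props["parent"]))
    return triples

def query(triples, predicate, obj):
    """Answer a competency question, e.g. 'what is on the table?'."""
    return sorted(s for s, p, o in triples if p == predicate and o == obj)

triples = to_triples(scene)
print(query(triples, "isOnTopOf", "table"))  # ['mug', 'spoon']
```

In the actual system, the normalization step would target USD prims rather than a Python dict, and the triples would live in a proper knowledge graph queried with an ontology-aware reasoner; the sketch only shows why a unified representation makes such queries straightforward.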
Similar Papers
Real2USD: Scene Representations in Universal Scene Description Language
Robotics
Robots understand tasks by reading scene descriptions.
Towards Terrain-Aware Task-Driven 3D Scene Graph Generation in Outdoor Environments
Robotics
Helps robots understand outdoor places for better jobs.
Structured Interfaces for Automated Reasoning with 3D Scene Graphs
CV and Pattern Recognition
Robots understand spoken words by seeing objects.