Real2USD: Scene Representations in Universal Scene Description Language
By: Christopher D. Hsu, Pratik Chaudhari
Potential Business Impact:
Robots understand tasks by reading scene descriptions.
Large Language Models (LLMs) can help robots reason about abstract task specifications. This requires augmenting classical representations of the environment used by robots with natural language-based priors. There are a number of existing approaches to doing so, but they are tailored to specific tasks, e.g., visual-language models for navigation, language-guided neural radiance fields for mapping, etc. This paper argues that the Universal Scene Description (USD) language is an effective and general representation of geometric, photometric and semantic information in the environment for LLM-based robotics tasks. Our argument is simple: a USD is an XML-based scene graph, readable by LLMs and humans alike, and rich enough to support essentially any task -- Pixar developed this language to store assets, scenes and even movies. We demonstrate a "Real to USD" system using a Unitree Go2 quadruped robot carrying LiDAR and an RGB camera that (i) builds an explicit USD representation of indoor environments with diverse objects and challenging settings with lots of glass, and (ii) parses the USD using Google's Gemini to demonstrate scene understanding, complex inferences, and planning. We also study different aspects of this system in simulated warehouse and hospital settings using Nvidia's Isaac Sim. Code is available at https://github.com/grasp-lyrl/Real2USD .
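To illustrate why a USD scene graph is readable by LLMs and humans alike, here is a minimal sketch that serializes a toy indoor scene into USD's human-readable `.usda`-style text. The prim names, types, and attribute values are hypothetical illustrations, not taken from the Real2USD system:

```python
# Minimal sketch: format a toy scene graph as .usda-style text.
# Prim names and attributes below are illustrative assumptions.

def prim_to_usda(name, prim_type, attrs, children=(), indent=0):
    """Recursively render one prim (scene-graph node) as .usda-style text."""
    pad = "    " * indent
    lines = [f'{pad}def {prim_type} "{name}"', pad + "{"]
    for key, value in attrs.items():
        lines.append(f"{pad}    {key} = {value}")
    for child in children:
        lines.append(prim_to_usda(*child, indent=indent + 1))
    lines.append(pad + "}")
    return "\n".join(lines)

# A room containing a chair and a glass door, as nested prims.
scene = prim_to_usda(
    "Room", "Xform", {},
    children=[
        ("Chair", "Mesh", {"double3 xformOp:translate": "(1.0, 0.0, 2.5)"}),
        ("GlassDoor", "Mesh", {"float inputs:opacity": "0.1"}),
    ],
)
print(scene)
```

The resulting plain-text hierarchy of named objects with geometric and photometric attributes is exactly the kind of representation the paper argues an LLM such as Gemini can parse directly for scene understanding and planning.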
Similar Papers
Generating Actionable Robot Knowledge Bases by Combining 3D Scene Graphs with Robot Ontologies
Robotics
Robots understand their surroundings to make smart choices.
Neural USD: An object-centric framework for iterative editing and control
CV and Pattern Recognition
Lets you change parts of a picture without messing it up.
From Scan to Action: Leveraging Realistic Scans for Embodied Scene Understanding
CV and Pattern Recognition
Makes robots learn and edit real places better.