Online Object-Level Semantic Mapping for Quadrupeds in Real-World Environments
By: Emad Razavi, Angelo Bratta, João Carlos Virgolino Soares, and more
Potential Business Impact:
Robot learns and remembers objects in a room.
We present an online semantic object mapping system for a quadruped robot operating in real indoor environments, turning sensor detections into named objects in a global map. During a run, the mapper integrates range geometry with camera detections, merges co-located detections within a frame, and associates repeated detections with persistent object instances across frames. Objects remain in the map when they are out of view, and repeated sightings update the same instance rather than creating duplicates. The output is a compact object layer that can be queried for class, pose, and confidence, is integrated with the occupancy map, and is readable by a planner. In on-robot tests, the object layer remained stable across viewpoint changes.
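To make the pipeline concrete, the sketch below illustrates the two association steps described above: merging co-located same-class detections within a frame, then associating the merged detections with persistent instances across frames so repeated sightings update an existing object instead of spawning a duplicate. This is a minimal illustration under stated assumptions, not the paper's implementation; the class names, distance gates (0.3 m and 0.5 m), and running-average confidence update are hypothetical choices for the example.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    label: str               # semantic class from the camera detector
    position: np.ndarray     # 3D position in the map frame (range + camera fusion)
    confidence: float

@dataclass
class ObjectInstance:
    label: str
    position: np.ndarray
    confidence: float
    num_sightings: int = 1

class ObjectLayer:
    """Persistent object layer: objects stay in the map when out of view."""

    def __init__(self, merge_radius=0.3, assoc_radius=0.5):
        self.merge_radius = merge_radius   # intra-frame merge gate (m), assumed value
        self.assoc_radius = assoc_radius   # cross-frame association gate (m), assumed value
        self.instances: list[ObjectInstance] = []

    def _merge_frame(self, detections):
        """Fuse same-class detections that are co-located within one frame."""
        merged = []
        for det in detections:
            for m in merged:
                if (m.label == det.label
                        and np.linalg.norm(m.position - det.position) < self.merge_radius):
                    m.position = (m.position + det.position) / 2.0
                    m.confidence = max(m.confidence, det.confidence)
                    break
            else:
                merged.append(Detection(det.label, det.position.copy(), det.confidence))
        return merged

    def update(self, detections):
        """Integrate one frame; unmatched detections become new instances."""
        for det in self._merge_frame(detections):
            best, best_dist = None, self.assoc_radius
            for inst in self.instances:
                d = np.linalg.norm(inst.position - det.position)
                if inst.label == det.label and d < best_dist:
                    best, best_dist = inst, d
            if best is None:
                self.instances.append(
                    ObjectInstance(det.label, det.position.copy(), det.confidence))
            else:
                # repeated sighting: update the same instance (running average)
                n = best.num_sightings
                best.position = (best.position * n + det.position) / (n + 1)
                best.confidence = (best.confidence * n + det.confidence) / (n + 1)
                best.num_sightings += 1

    def query(self, label):
        """Query the layer for (class, pose, confidence) of a given class."""
        return [(i.label, i.position, i.confidence)
                for i in self.instances if i.label == label]
```

A planner could call `query("chair")` after each frame to retrieve the current chair instances; because instances persist between updates, the result is stable even when the objects leave the camera's field of view.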
Similar Papers
OmniMap: A General Mapping Framework Integrating Optics, Geometry, and Semantics
Robotics
Robots see and understand the world perfectly.
RSV-SLAM: Toward Real-Time Semantic Visual SLAM in Indoor Dynamic Environments
Robotics
Helps robots see and move in busy places.
Real-Time 3D Vision-Language Embedding Mapping
Robotics
Robots understand and find objects by voice.