Constraint-Based Modeling of Dynamic Entities in 3D Scene Graphs for Robust SLAM
By: Marco Giberna, Muhammad Shaheer, Hriday Bavle, and more
Potential Business Impact:
Robots see and track moving things better.
Autonomous robots depend critically on their ability to perceive and process information from dynamic, ever-changing environments. Traditional simultaneous localization and mapping (SLAM) approaches struggle to maintain consistent scene representations in the presence of numerous moving objects, often treating dynamic elements as outliers rather than modeling them explicitly in the scene representation. In this paper, we present a novel hierarchical 3D scene graph-based SLAM framework that addresses the challenge of modeling and estimating the pose of dynamic objects and agents. We use fiducial markers to detect dynamic entities and extract their attributes, improving keyframe selection and adding new capabilities for dynamic-entity mapping. We maintain a hierarchical representation in which dynamic entities are registered in the SLAM graph and constrained to robot keyframes and the building's floor level through our novel entity-keyframe and intra-entity constraints. By combining semantic and geometric constraints between dynamic entities and the environment, our system jointly optimizes the SLAM graph to estimate the poses of the robot and the dynamic agents and objects while maintaining an accurate map. Experimental evaluation demonstrates that our approach achieves a 27.57% reduction in pose estimation error compared to traditional methods and enables higher-level reasoning about scene dynamics.
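To make the graph structure described in the abstract concrete, below is a minimal illustrative sketch of how dynamic entities could be registered alongside robot keyframes in a jointly optimized factor graph, written with the GTSAM Python bindings. This is not the authors' implementation: the symbols X/E, the noise values, the example measurements, and the use of relative-pose (BetweenFactorPose3) factors to stand in for the paper's entity-keyframe constraints are all assumptions made for illustration.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, E  # X(i): robot keyframes, E(j): dynamic entities (assumed naming)

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise models: sigmas ordered rotation (rad) then translation (m), as GTSAM Pose3 expects.
# The values are placeholders, not the paper's calibration.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01, 0.01, 0.01, 0.01, 0.01, 0.01]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.02, 0.05, 0.05, 0.05]))
marker_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.10, 0.10, 0.10]))

# Anchor the first keyframe.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Odometry constraint between consecutive keyframes.
odom_01 = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), odom_01, odom_noise))

# Entity-keyframe constraint (approximated here as a relative-pose factor):
# the pose of a dynamic entity's fiducial marker as observed from keyframe X(1).
marker_obs = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 0.8, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(1), E(0), marker_obs, marker_noise))

# Initial guesses, deliberately perturbed to show that robot and entity poses
# are refined together during optimization.
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.9, 0.1, 0.0)))
initial.insert(E(0), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.3, 0.9, 0.0)))

# Jointly optimize robot keyframes and dynamic-entity poses.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(E(0)))
```

Intra-entity constraints between successive observations of the same moving entity, and constraints tying entities to the floor level, would be added analogously as further factors in the same graph; the paper's hierarchical scene-graph layers sit on top of this kind of joint optimization.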
Similar Papers
RSV-SLAM: Toward Real-Time Semantic Visual SLAM in Indoor Dynamic Environments
Robotics
Helps robots see and move in busy places.
Dynamic Visual SLAM using a General 3D Prior
Robotics
Helps robots see and map moving things.
Leveraging Semantic Graphs for Efficient and Robust LiDAR SLAM
Robotics
Helps robots understand where they are and what's around.