SeGuE: Semantic Guided Exploration for Mobile Robots
By: Cody Simons, Aritra Samanta, Amit K. Roy-Chowdhury, and more
Potential Business Impact:
Robots learn to map places and what's in them.
The rise of embodied AI applications has enabled robots to perform complex tasks that require a sophisticated understanding of their environment. For robots to operate successfully in such settings, maps must be constructed that include semantic information in addition to geometric information. In this paper, we address the novel problem of semantic exploration, in which a mobile robot must autonomously explore an environment to fully map both its structure and the semantic appearance of its features. We develop a method based on next-best-view exploration, where candidate poses are scored by the semantic features visible from each pose. We explore two alternative methods for sampling candidate views and demonstrate the effectiveness of our framework in both simulated and physical experiments. Automatic creation of high-quality semantic maps can help robots better understand and interact with their environments, making future embodied AI applications easier to deploy.
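To make the next-best-view idea concrete, here is a minimal Python sketch of scoring candidate poses by the semantic features visible from each one. This is not the authors' implementation: the 2D grid world, the simple ray-casting, the `semantic_weight` bonus, and names like `score_view` and `next_best_view` are all illustrative assumptions.

```python
import numpy as np

# Hypothetical 2D semantic grid: 0 = free space, 1 = obstacle,
# values >= 2 = semantic class IDs. `observed` marks already-mapped cells.

def visible_cells(grid, pose, fov=np.pi / 2, n_rays=32, max_range=20.0):
    """Ray-cast from a candidate pose (x, y, heading) and collect visible cells."""
    x0, y0, theta = pose
    cells = set()
    for angle in np.linspace(theta - fov / 2, theta + fov / 2, n_rays):
        for r in np.arange(0.5, max_range, 0.5):
            cx = int(x0 + r * np.cos(angle))
            cy = int(y0 + r * np.sin(angle))
            if not (0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]):
                break
            cells.add((cx, cy))
            if grid[cx, cy] == 1:  # ray stops at the first obstacle
                break
    return cells

def score_view(grid, observed, pose, semantic_weight=5.0):
    """Score a view: each unseen cell counts once; unseen semantic cells count more."""
    score = 0.0
    for cx, cy in visible_cells(grid, pose):
        if not observed[cx, cy]:
            score += semantic_weight if grid[cx, cy] >= 2 else 1.0
    return score

def next_best_view(grid, observed, candidate_poses):
    """Pick the candidate pose with the highest semantic-aware information gain."""
    return max(candidate_poses, key=lambda p: score_view(grid, observed, p))

# Toy usage: a 40x40 world with one semantic region and two candidate poses.
grid = np.zeros((40, 40), dtype=int)
grid[25:30, 25:30] = 2                     # semantic region (e.g., "table")
observed = np.zeros_like(grid, dtype=bool)
observed[:10, :10] = True                  # corner the robot has already mapped
candidates = [(5.0, 5.0, 0.0), (20.0, 20.0, np.pi / 4)]
print(next_best_view(grid, observed, candidates))
```

Weighting unseen semantic cells more heavily than plain free space is one simple way to bias exploration toward semantically rich views; the paper's two view-sampling strategies would slot in as the source of `candidate_poses`.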
Similar Papers
Understanding while Exploring: Semantics-driven Active Mapping
Robotics
Robots learn to explore better by choosing what to see.
Where Did I Leave My Glasses? Open-Vocabulary Semantic Exploration in Real-World Semi-Static Environments
Robotics
Robots learn to remember and find things in changing rooms.
SEA: Semantic Map Prediction for Active Exploration of Uncertain Areas
Robotics
Robots learn to map new places faster.