Deep Learning Perspective of Scene Understanding in Autonomous Robots
By: Afia Maham, Dur E Nayab Tashfa
Potential Business Impact:
Helps robots see and understand the world.
This paper reviews deep learning applications in scene understanding for autonomous robots, covering innovations in object detection, semantic and instance segmentation, depth estimation, 3D reconstruction, and visual SLAM. It emphasizes how these techniques address the limitations of traditional geometric models, improve real-time depth perception despite occlusions and textureless surfaces, and strengthen semantic reasoning about the environment. When these perception modules are integrated, robots operating in dynamic and unstructured environments become more effective at decision-making, navigation, and interaction. Lastly, the review outlines open problems and research directions for advancing learning-based scene understanding in autonomous robots.
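To make one of the perception modules discussed above concrete, the minimal sketch below runs a pretrained instance segmentation network on a camera frame and filters its detections, a common learning-based building block for scene understanding. The model choice (torchvision's COCO-pretrained Mask R-CNN), the confidence threshold, and the dummy input frame are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not the paper's method): instance segmentation as one
# perception module in a robot's scene-understanding pipeline.
# Assumes PyTorch and torchvision are installed; weights download on first use.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained Mask R-CNN
model.eval()

# Dummy RGB frame standing in for a camera image: (3, H, W), values in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    prediction = model([frame])[0]  # dict with boxes, labels, scores, masks

# Keep only confident detections; downstream modules (mapping, planning,
# interaction) would consume these masks and labels as semantic scene evidence.
keep = prediction["scores"] > 0.5      # threshold chosen only for illustration
boxes = prediction["boxes"][keep]      # (N, 4) bounding boxes
masks = prediction["masks"][keep]      # (N, 1, H, W) per-instance soft masks
labels = prediction["labels"][keep]    # (N,) COCO class indices
print(f"Detected {len(labels)} object instances")
```

In a real robot stack, the dummy frame would be replaced by live camera input, and the per-instance masks would typically be fused with depth estimates or a SLAM map to ground the semantics in 3D.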
Similar Papers
An Analytical Framework to Enhance Autonomous Vehicle Perception for Smart Cities
Artificial Intelligence
Helps self-driving cars see and understand roads.
nuScenes Revisited: Progress and Challenges in Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars learn from real-world driving.
Large Language Models and 3D Vision for Intelligent Robotic Perception and Autonomy: A Review
Robotics
Robots understand and act on spoken commands.