Imitation Learning for Obstacle Avoidance Using End-to-End CNN-Based Sensor Fusion
By: Lamiaa H. Zain, Hossam H. Ammar, Raafat E. Shalaby
Potential Business Impact:
Robots learn to steer around obstacles using cameras.
Obstacle avoidance is crucial for mobile robot navigation in both known and unknown environments. This research designs, trains, and tests two custom Convolutional Neural Networks (CNNs) that take color and depth images from a depth camera as inputs. Both networks fuse the two modalities to produce a single output: the mobile robot's angular velocity, which serves as its steering command. A new visual navigation dataset was collected in diverse environments with varying lighting conditions and dynamic obstacles. During data collection, a Wi-Fi communication link was established between a remote server and the robot using Robot Operating System (ROS) topics; velocity commands transmitted from the server to the robot enabled synchronized recording of visual data and the corresponding steering commands. Evaluation metrics such as Mean Squared Error, Variance Score, and feed-forward time provided a clear comparison between the two networks and indicated which one is better suited to the application.
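The abstract does not specify the networks' internals, but the described pipeline (two image modalities fused into one steering regression) can be illustrated with a minimal late-fusion sketch. Everything below is an assumption for illustration: the toy kernel-based feature extractor, the input sizes, and the tanh head standing in for a bounded angular-velocity output are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_branch(image, kernels):
    """Toy CNN branch: valid 2-D cross-correlation with each kernel,
    ReLU, then global average pooling into one feature per kernel."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    feats = []
    for k in kernels:
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        feats.append(np.maximum(out, 0.0).mean())  # ReLU + global avg pool
    return np.array(feats)

# Hypothetical tiny inputs standing in for one color frame (as grayscale)
# and one depth frame from the depth camera.
rgb = rng.random((16, 16))
depth = rng.random((16, 16))

# Random, untrained kernels; a real network would learn these weights.
rgb_kernels = rng.standard_normal((4, 3, 3))
depth_kernels = rng.standard_normal((4, 3, 3))

# Late sensor fusion: concatenate the per-modality feature vectors ...
fused = np.concatenate([conv_branch(rgb, rgb_kernels),
                        conv_branch(depth, depth_kernels)])

# ... then regress a single steering command (angular velocity),
# squashed by tanh so the command stays bounded.
w_head = rng.standard_normal(fused.shape[0])
angular_velocity = float(np.tanh(fused @ w_head))
print(angular_velocity)
```

The design choice shown here is late fusion: each modality is encoded separately and the feature vectors are concatenated before the regression head, which is one common way (among several) to combine color and depth streams.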
Similar Papers
Real-Time Obstacle Avoidance for a Mobile Robot Using CNN-Based Sensor Fusion
Robotics
Robots learn to steer around anything they see.
Deep Learning-Based Multi-Modal Fusion for Robust Robot Perception and Navigation
Machine Learning (CS)
Helps robots see and move better in tricky places.
Industrial Internet Robot Collaboration System and Edge Computing Optimization
Robotics
Helps robots avoid obstacles and reach goals faster.