Real-Time Obstacle Avoidance for a Mobile Robot Using CNN-Based Sensor Fusion
By: Lamiaa H. Zain, Raafat E. Shalaby
Potential Business Impact:
Robots learn to steer around anything they see.
Obstacle avoidance is a critical component of the navigation stack that mobile robots need to operate effectively in complex and unknown environments. In this research, three end-to-end Convolutional Neural Networks (CNNs) were trained and evaluated offline, then deployed on a differential-drive mobile robot for real-time obstacle avoidance. The networks generate low-level steering commands from synchronized color and depth images acquired by an Intel RealSense D415 RGB-D camera in diverse environments. Offline evaluation showed that the NetConEmb model achieved the best performance, with a notably low median absolute error (MedAE) of $0.58 \times 10^{-3}$ rad/s. In comparison, the lighter NetEmb architecture adopted in this study, which reduces the number of trainable parameters by approximately 25% and converges faster, produced comparable results, with an RMSE of $21.68 \times 10^{-3}$ rad/s versus the $21.42 \times 10^{-3}$ rad/s obtained by NetConEmb. Real-time navigation further confirmed NetConEmb's robustness: it achieved a 100% success rate in both known and unknown environments, while NetEmb and NetGated succeeded only in navigating the known environment.
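The abstract does not spell out the network internals, but the described pipeline (two synchronized image streams fused by a CNN that regresses a low-level steering command) can be sketched as below. This is a minimal illustration in PyTorch, assuming a late-fusion design and hypothetical layer sizes; it is not the authors' exact NetConEmb, NetEmb, or NetGated architecture.

```python
# Hypothetical sketch of an end-to-end RGB-D fusion CNN that regresses a single
# angular-velocity steering command (rad/s), in the spirit of the models above.
# Layer sizes and the late-fusion scheme are assumptions, not the paper's design.
import torch
import torch.nn as nn


class RGBDSteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate convolutional encoders for the color and depth streams.
        self.rgb_encoder = self._make_encoder(in_channels=3)
        self.depth_encoder = self._make_encoder(in_channels=1)
        # Fused embedding regressed to one steering value.
        self.head = nn.Sequential(
            nn.Linear(2 * 64, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    @staticmethod
    def _make_encoder(in_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> 64-dim embedding
            nn.Flatten(),
        )

    def forward(self, rgb, depth):
        # Late fusion: concatenate the two embeddings, then regress steering.
        fused = torch.cat([self.rgb_encoder(rgb), self.depth_encoder(depth)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    net = RGBDSteeringNet()
    rgb = torch.rand(1, 3, 120, 160)    # synchronized color frame
    depth = torch.rand(1, 1, 120, 160)  # aligned depth frame
    print(net(rgb, depth).shape)        # torch.Size([1, 1]) -> rad/s command
```

A network like this would typically be trained by imitation, regressing the recorded angular velocity of a human-driven run with an L1 or L2 loss, which is consistent with the MedAE and RMSE figures reported above.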
Similar Papers
Imitation Learning for Obstacle Avoidance Using End-to-End CNN-Based Sensor Fusion
Robotics
Robots learn to steer around obstacles using cameras.
Self-localization on a 3D map by fusing global and local features from a monocular camera
Robotics
Helps self-driving cars figure out where they are with one camera.
Mini Autonomous Car Driving based on 3D Convolutional Neural Networks
Robotics
Teaches tiny cars to drive themselves safely.