Imitation Learning for Obstacle Avoidance Using End-to-End CNN-Based Sensor Fusion

Published: July 10, 2025 | arXiv ID: 2507.08112v1

By: Lamiaa H. Zain, Hossam H. Ammar, Raafat E. Shalaby

Potential Business Impact:

Mobile robots learn to steer around obstacles using color and depth camera input.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Obstacle avoidance is crucial for mobile robot navigation in both known and unknown environments. This research designs, trains, and tests two custom Convolutional Neural Networks (CNNs) that take color and depth images from a depth camera as inputs. Both networks fuse the two modalities to produce a single output: the mobile robot's angular velocity, which serves as its steering command. A new visual navigation dataset was collected in diverse environments with varying lighting conditions and dynamic obstacles. During data collection, a communication link was established over Wi-Fi between a remote server and the robot using Robot Operating System (ROS) topics; velocity commands transmitted from the server to the robot enabled synchronized recording of visual data and the corresponding steering commands. Evaluation metrics including Mean Squared Error, variance score, and feed-forward time provide a clear comparison between the two networks and indicate which one is better suited to the application.
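The paper does not reproduce its exact layer configuration here, but the described approach, two image streams fused inside one CNN that regresses a steering command, can be illustrated with a minimal sketch. Everything below (layer sizes, the late-fusion point, and the name FusionSteeringNet) is an illustrative assumption, not the authors' architecture:

```python
# Minimal sketch of an end-to-end sensor-fusion CNN for steering regression.
# All layer sizes and the fusion point are assumptions for illustration only.
import torch
import torch.nn as nn


def conv_branch(in_channels: int) -> nn.Sequential:
    """Small convolutional feature extractor for one sensor modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2),
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d((4, 4)),
        nn.Flatten(),  # -> 32 * 4 * 4 = 512 features per modality
    )


class FusionSteeringNet(nn.Module):
    """Late fusion: separate RGB and depth branches, concatenated features,
    then a small regression head that outputs angular velocity (rad/s)."""

    def __init__(self):
        super().__init__()
        self.rgb_branch = conv_branch(in_channels=3)    # color image
        self.depth_branch = conv_branch(in_channels=1)  # depth image
        self.head = nn.Sequential(
            nn.Linear(512 + 512, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # single steering command
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = FusionSteeringNet()
    rgb = torch.rand(1, 3, 120, 160)    # one RGB frame (resolution assumed)
    depth = torch.rand(1, 1, 120, 160)  # matching depth frame
    print("predicted angular velocity:", model(rgb, depth).item())
```

In a setup like the one described, such a network would be trained on the recorded (image, steering command) pairs with a regression loss such as MSE, and the feed-forward time of a single forward pass would be one of the metrics used to compare candidate architectures.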

Country of Origin
🇪🇬 Egypt

Page Count
7 pages

Category
Computer Science:
Robotics