Digital Twin Supervised Reinforcement Learning Framework for Autonomous Underwater Navigation
By: Zamirddine Mari, Mohamad Motasem Nawaf, Pierre Drap
Potential Business Impact:
Teaches robots to swim safely underwater.
Autonomous navigation in underwater environments remains a major challenge due to the absence of GPS, degraded visibility, and the presence of submerged obstacles. This article investigates these issues through the case of the BlueROV2, an open platform widely used for scientific experimentation. We propose a deep reinforcement learning approach based on the Proximal Policy Optimization (PPO) algorithm, using an observation space that combines target-oriented navigation information, a virtual occupancy grid, and ray-casting along the boundaries of the operational area. The learned policy is compared against a reference deterministic kinematic planner, the Dynamic Window Approach (DWA), commonly employed as a robust baseline for obstacle avoidance. The evaluation is conducted in a realistic simulation environment and complemented by validation on a physical BlueROV2 supervised by a 3D digital twin of the test site, which helps reduce the risks associated with real-world experimentation. The results show that the PPO policy consistently outperforms DWA in highly cluttered environments, owing to better local adaptation and fewer collisions. Finally, the experiments demonstrate the transferability of the learned behavior from simulation to the real world, confirming the relevance of deep RL for autonomous navigation in underwater robotics.
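To make the three-part observation space concrete, here is a minimal, self-contained sketch of how such an observation vector could be assembled. All function names, dimensions, and the point-obstacle simplification are our own illustrative assumptions, not the paper's actual implementation (which would feed this vector to a PPO policy network):

```python
import math

def build_observation(pos, heading, target, obstacles, grid_size=8, cell=1.0,
                      n_rays=8, max_range=5.0):
    """Hypothetical sketch: assemble an observation combining
    (1) target-oriented navigation info, (2) a virtual occupancy grid,
    and (3) ray-cast distances, as described in the abstract.

    pos, target: (x, y) tuples; heading in radians; obstacles: a list of
    (x, y) point obstacles (a simplification of a real sensor model).
    """
    # 1. Target-oriented info: distance and relative bearing to the goal.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    bearing = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi

    # 2. Virtual occupancy grid centred on the vehicle (1.0 = occupied cell).
    half = grid_size * cell / 2.0
    grid = [[0.0] * grid_size for _ in range(grid_size)]
    for ox, oy in obstacles:
        i = int((ox - pos[0] + half) / cell)
        j = int((oy - pos[1] + half) / cell)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[j][i] = 1.0

    # 3. Ray-casting: distance to the nearest obstacle along evenly
    #    spaced rays, normalised to [0, 1] by the maximum sensing range.
    rays = []
    for k in range(n_rays):
        ang = heading + 2 * math.pi * k / n_rays
        r = max_range
        for ox, oy in obstacles:
            vx, vy = ox - pos[0], oy - pos[1]
            along = vx * math.cos(ang) + vy * math.sin(ang)   # distance along the ray
            across = -vx * math.sin(ang) + vy * math.cos(ang)  # offset from the ray
            if 0 < along < r and abs(across) < cell / 2:
                r = along
        rays.append(r / max_range)

    # Flatten everything into one vector for the policy network.
    flat_grid = [v for row in grid for v in row]
    return [dist, bearing] + flat_grid + rays
```

With the defaults above, the observation is a flat vector of 2 + 64 + 8 = 74 values; the real system would tune the grid resolution, ray count, and sensing range to the BlueROV2's sensors and operating area.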
Similar Papers
Deep RL-based Autonomous Navigation of Micro Aerial Vehicles (MAVs) in a complex GPS-denied Indoor Environment
Robotics
Drones fly themselves indoors, faster and smarter.
Autonomous UAV Flight Navigation in Confined Spaces: A Reinforcement Learning Approach
Robotics
Drones learn to fly safely in dark tunnels.
Navigation in a Three-Dimensional Urban Flow using Deep Reinforcement Learning
Artificial Intelligence
Drones fly safely through windy cities.