Shared Control of Holonomic Wheelchairs through Reinforcement Learning
By: Jannis Bähler, Diego Paez-Granados, Jorge Peña-Queralta
Potential Business Impact:
Makes wheelchairs smarter for easier, safer rides.
Smart electric wheelchairs can improve user experience by supporting the driver with shared control. State-of-the-art work has shown the potential of shared control to improve navigation safety for non-holonomic robots. For holonomic systems, however, current approaches often lead to unintuitive behavior for the user and fail to exploit the full potential of omnidirectional driving. We therefore propose a reinforcement learning-based method that takes a 2D user input and outputs a 3D motion command while ensuring user comfort and reducing the cognitive load on the driver. Our approach is trained in Isaac Gym and tested in simulation in Gazebo. We compare different RL agent architectures and reward functions using metrics that capture cognitive load and user comfort. We show that our method ensures collision-free navigation while smartly orienting the wheelchair and achieving smoothness that is better than or competitive with a previous non-learning-based method. We further perform a sim-to-real transfer and demonstrate, to the best of our knowledge, the first real-world implementation of RL-based shared control for an omnidirectional mobility platform.
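The 2D-input-to-3D-command mapping can be illustrated with a minimal sketch, assuming a small feed-forward policy in PyTorch. The class name, layer sizes, observation layout (joystick input concatenated with a range scan), and velocity limits below are illustrative assumptions, not the architecture or observation space evaluated in the paper.

```python
# Minimal sketch (assumed architecture, not the paper's): a policy for a
# holonomic base that maps a 2D joystick command plus range observations to a
# 3D velocity command (v_x, v_y, omega).
import torch
import torch.nn as nn


class SharedControlPolicy(nn.Module):
    def __init__(self, num_ranges: int = 36, hidden: int = 128,
                 max_lin_vel: float = 1.0, max_ang_vel: float = 1.5):
        super().__init__()
        self.max_lin_vel = max_lin_vel
        self.max_ang_vel = max_ang_vel
        # Observation: 2D user input (joystick x/y) concatenated with a range scan.
        self.net = nn.Sequential(
            nn.Linear(2 + num_ranges, hidden),
            nn.ELU(),
            nn.Linear(hidden, hidden),
            nn.ELU(),
            nn.Linear(hidden, 3),  # normalized (v_x, v_y, omega)
        )

    def forward(self, user_input: torch.Tensor, ranges: torch.Tensor) -> torch.Tensor:
        obs = torch.cat([user_input, ranges], dim=-1)
        action = torch.tanh(self.net(obs))
        # Scale normalized actions to the platform's translational and rotational limits.
        scale = torch.tensor(
            [self.max_lin_vel, self.max_lin_vel, self.max_ang_vel],
            device=action.device,
        )
        return action * scale


# Usage example: the user pushes the joystick straight ahead in open space;
# the (untrained) policy returns a 3D velocity command of shape (1, 3).
policy = SharedControlPolicy()
joystick = torch.tensor([[1.0, 0.0]])   # forward push on a 2D joystick
scan = torch.full((1, 36), 3.0)         # clear 3 m ranges all around
print(policy(joystick, scan))
```

In the trained setting described above, the extra rotational degree of freedom lets such a policy reorient the wheelchair while still following the user's 2D intent, which is what distinguishes this shared control from a non-holonomic formulation.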
Similar Papers
Vision-based Goal-Reaching Control for Mobile Robots Using a Hierarchical Learning Framework
Robotics
Keeps big robots safe while they learn.
Human-Centered Shared Autonomy for Motor Planning, Learning, and Control Applications
Human-Computer Interaction
Helps robots and people work together better.
A Human-Sensitive Controller: Adapting to Human Ergonomics and Physical Constraints via Reinforcement Learning
Robotics
Helps injured workers do jobs safely again.