Investigating Adaptive Tuning of Assistive Exoskeletons Using Offline Reinforcement Learning: Challenges and Insights
By: Yasin Findik, Christopher Coco, Reza Azadeh
Potential Business Impact:
Helps assistive arm braces adapt to each user with less manual setup.
Assistive exoskeletons have shown great potential in enhancing mobility for individuals with motor impairments, yet their effectiveness relies on precise parameter tuning for personalized assistance. In this study, we investigate the potential of offline reinforcement learning for optimizing effort thresholds in upper-limb assistive exoskeletons, aiming to reduce reliance on manual calibration. Specifically, we frame the problem as a multi-agent system where separate agents optimize biceps and triceps effort thresholds, enabling a more adaptive and data-driven approach to exoskeleton control. Mixed Q-Functionals (MQF) is employed to efficiently handle continuous action spaces while leveraging pre-collected data, thereby mitigating the risks associated with real-time exploration. Experiments were conducted using the MyoPro 2 exoskeleton across two distinct tasks involving horizontal and vertical arm movements. Our results indicate that the proposed approach can dynamically adjust threshold values based on learned patterns, potentially improving user interaction and control, though performance evaluation remains challenging due to dataset limitations.
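The paper does not include code, but the core idea of the abstract, using pre-collected interaction data to pick per-muscle effort thresholds offline instead of hand-calibrating them, can be illustrated with a heavily simplified sketch. The snippet below is not Mixed Q-Functionals: it reduces the problem to a bandit-style offline value estimate, where each "agent" (biceps, triceps) scores discretized threshold values by their mean logged reward. The dataset, reward shape, and `fit_agent` helper are all hypothetical stand-ins; real data would come from MyoPro 2 sessions.

```python
import numpy as np

# Hypothetical offline log: rows of (biceps_thr, triceps_thr, reward).
# Synthetic stand-in for logged exoskeleton sessions; the "true" best
# thresholds are planted at b=0.3, t=0.6 purely for illustration.
rng = np.random.default_rng(0)
logs = []
for _ in range(2000):
    b = rng.uniform(0.0, 1.0)  # biceps effort threshold (normalized)
    t = rng.uniform(0.0, 1.0)  # triceps effort threshold (normalized)
    r = -((b - 0.3) ** 2 + (t - 0.6) ** 2) + rng.normal(0.0, 0.01)
    logs.append((b, t, r))
data = np.array(logs)

def fit_agent(actions, rewards, n_bins=10):
    """Score each discretized threshold bin by its mean logged reward
    and return the center of the best-scoring bin (a crude offline
    value estimate, not the functional representation used by MQF)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(actions, edges) - 1, 0, n_bins - 1)
    q = np.array([rewards[idx == k].mean() if np.any(idx == k) else -np.inf
                  for k in range(n_bins)])
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[np.argmax(q)]

# One independent "agent" per muscle, as in the paper's multi-agent framing.
best_biceps = fit_agent(data[:, 0], data[:, 2])
best_triceps = fit_agent(data[:, 1], data[:, 2])
print(best_biceps, best_triceps)
```

Because each agent averages over the other agent's logged actions, this sketch ignores the coordination problem that motivates the multi-agent formulation; it only conveys the offline, exploration-free flavor of the approach.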
Similar Papers
Adaptive Torque Control of Exoskeletons under Spasticity Conditions via Reinforcement Learning
Robotics
Helps robots safely move stiff legs.
Benchmarking Offline Reinforcement Learning for Emotion-Adaptive Social Robotics
Robotics
Teaches robots to understand feelings from old data.
Motion Adaptation Across Users and Tasks for Exoskeletons via Meta-Learning
Robotics
Helps robots learn to help people move better.