COLSON: Controllable Learning-Based Social Navigation via Diffusion-Based Reinforcement Learning
By: Yuki Tomita, Kohei Matsumoto, Yuki Hyodo, and more
Potential Business Impact:
Helps robots move safely around people.
Mobile robot navigation in dynamic environments with pedestrian traffic is a key challenge in the development of autonomous mobile service robots. Recently, deep reinforcement learning-based methods have been actively studied and have outperformed traditional rule-based approaches owing to their optimization capabilities. Among these, methods that assume a continuous action space typically rely on a Gaussian distribution, which limits the flexibility of the generated actions. Meanwhile, the application of diffusion models to reinforcement learning has advanced, enabling more flexible action distributions than Gaussian-based approaches. In this study, we applied a diffusion-based reinforcement learning approach to social navigation and validated its effectiveness. Furthermore, by leveraging the characteristics of diffusion models, we propose an extension that enables post-training action smoothing and adaptation to static obstacle scenarios not considered during training.
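To illustrate the distinction the abstract draws, the sketch below contrasts a unimodal Gaussian policy with a toy DDPM-style reverse diffusion process that generates a continuous action by iterative denoising. This is a minimal illustration only: the noise schedule, the `toy_denoiser`, and all function names are assumptions for demonstration, not the architecture or training procedure used in the paper.

```python
import numpy as np

def gaussian_policy_sample(mean, std, rng):
    # Standard Gaussian policy head: unimodal by construction.
    return rng.normal(mean, std)

def diffusion_policy_sample(denoise_fn, action_dim, n_steps, rng):
    """Toy DDPM-style reverse process for action generation.

    `denoise_fn(a, t)` predicts the noise in action `a` at step `t`;
    here it is a hypothetical stand-in for a learned network.
    """
    # Linear noise schedule (illustrative, not the paper's schedule).
    betas = np.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    a = rng.normal(size=action_dim)  # start from pure noise
    for t in reversed(range(n_steps)):
        eps_hat = denoise_fn(a, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        a = (a - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # inject noise on all but the final step
            a += np.sqrt(betas[t]) * rng.normal(size=action_dim)
    return a

# Hypothetical "trained" denoiser that pulls samples toward one of two
# modes (e.g., passing a pedestrian on the left or the right) -- a
# bimodal behavior a single Gaussian cannot represent.
def toy_denoiser(a, t):
    target = np.array([0.5, 0.5]) if a[1] >= 0 else np.array([0.5, -0.5])
    return a - target  # predicted noise = sample minus its nearest mode

rng = np.random.default_rng(0)
action = diffusion_policy_sample(toy_denoiser, action_dim=2, n_steps=50, rng=rng)
```

Because the denoiser can pull samples toward either mode, repeated calls yield actions on both sides of the obstacle, whereas `gaussian_policy_sample` concentrates all mass around one mean. The paper's post-training smoothing and obstacle-adaptation extensions exploit exactly this kind of control over the sampling process.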
Similar Papers
Steering Your Diffusion Policy with Latent Space Reinforcement Learning
Robotics
Robots learn to improve by themselves.
A Hybrid Approach to Indoor Social Navigation: Integrating Reactive Local Planning and Proactive Global Planning
Robotics
Robot learns to walk through crowds safely.
A Comparative Study of Human Motion Models in Reinforcement Learning Algorithms for Social Robot Navigation
Human-Computer Interaction
Helps robots safely walk through crowds.