Morphology-Aware Graph Reinforcement Learning for Tensegrity Robot Locomotion
By: Chi Zhang, Mingrui Li, Wenzhe Tong, and more
Potential Business Impact:
Makes wobbly robots walk and turn better.
Tensegrity robots combine rigid rods and elastic cables, offering high resilience and deployability but posing major challenges for locomotion control due to their underactuated and highly coupled dynamics. This paper introduces a morphology-aware reinforcement learning framework that integrates a graph neural network (GNN) into the Soft Actor-Critic (SAC) algorithm. By representing the robot's physical topology as a graph, the proposed GNN-based policy captures coupling among components, enabling faster and more stable learning than conventional multilayer perceptron (MLP) policies. The method is validated on a physical 3-bar tensegrity robot across three locomotion primitives, including straight-line tracking and bidirectional turning. It shows superior sample efficiency, robustness to noise and stiffness variations, and improved trajectory accuracy. Notably, the learned policies transfer directly from simulation to hardware without fine-tuning, achieving stable real-world locomotion. These results demonstrate the advantages of incorporating structural priors into reinforcement learning for tensegrity robot control.
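To make the idea of a morphology-aware policy concrete, below is a minimal sketch of a message-passing GNN policy over a hand-coded 3-bar tensegrity graph (6 nodes for the rod endpoints, edges for rods and cables). The node/edge wiring, feature sizes, action dimension, and pooling head are illustrative assumptions, not the authors' architecture; only the core idea, structuring the policy around the robot's physical topology, is taken from the abstract. In practice the network below would serve as the actor inside SAC, with a similar graph encoder for the critic.

```python
# Sketch of a morphology-aware actor: a small message-passing GNN over a
# 3-bar tensegrity graph, pooled into a cable-actuation command.
# Graph layout and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

# 3-bar tensegrity: 6 nodes (rod endpoints). Rods connect (0,1), (2,3), (4,5);
# the remaining edges stand in for elastic cables (assumed wiring).
EDGES = torch.tensor([
    [0, 1], [2, 3], [4, 5],              # rods
    [0, 2], [0, 4], [1, 3], [1, 5],      # cables (assumed layout)
    [2, 4], [3, 5],
])

class GNNPolicy(nn.Module):
    def __init__(self, node_dim: int, hidden: int = 64, action_dim: int = 9):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden)
        # Edge message: concatenated sender/receiver embeddings -> message.
        self.message = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.update = nn.GRUCell(hidden, hidden)
        # Head maps the pooled graph embedding to cable actuation commands.
        self.head = nn.Linear(hidden, action_dim)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_dim) per-endpoint state, e.g. position
        # and velocity; the exact observation contents are an assumption.
        h = torch.relu(self.encode(node_feats))
        src, dst = EDGES[:, 0], EDGES[:, 1]
        for _ in range(2):  # two rounds of message passing along the topology
            msgs = self.message(torch.cat([h[src], h[dst]], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, msgs)
            agg = agg.index_add_(0, src, msgs)  # treat edges as undirected
            h = self.update(agg, h)
        # Mean-pool node embeddings and emit a bounded action vector.
        return torch.tanh(self.head(h.mean(dim=0)))

if __name__ == "__main__":
    policy = GNNPolicy(node_dim=6)
    obs = torch.randn(6, 6)   # dummy per-node observations
    print(policy(obs))        # e.g. 9 cable-length commands
```

Because the message-passing steps operate only on the fixed edge list, the coupling between rods and cables is baked into the policy as a structural prior, which is what the paper credits for the faster, more stable learning relative to an unstructured MLP.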
Similar Papers
Walk the Robot: Exploring Soft Robotic Morphological Communication driven by Spiking Neural Networks
Neural and Evolutionary Computing
Robot parts talk to each other using wiggles.
RoboBallet: Planning for Multi-Robot Reaching with Graph Neural Networks and Reinforcement Learning
Robotics
Robots learn to work together without crashing.
Surrogate compliance modeling enables reinforcement learned locomotion gaits for soft robots
Robotics
Robots change shape to walk better on land and water.