Multi-Group Equivariant Augmentation for Reinforcement Learning in Robot Manipulation
By: Hongbin Lin, Juan Rojas, Kwok Wai Samuel Au
Potential Business Impact:
Helps robots learn manipulation tasks faster, using less training data.
Sample efficiency is critical for deploying visuomotor learning in real-world robotic manipulation. While task symmetry has emerged as a promising inductive bias for improving efficiency, most prior work is limited to isometric symmetries: the same group transformation is applied to all task objects across all timesteps. In this work, we explore non-isometric symmetries, applying multiple independent group transformations across spatial and temporal dimensions to relax these constraints. We introduce a novel formulation of the partially observable Markov decision process (POMDP) that incorporates non-isometric symmetry structures, and propose a simple yet effective data augmentation method, Multi-Group Equivariance Augmentation (MEA). We integrate MEA with offline reinforcement learning to improve sample efficiency, and introduce a voxel-based visual representation that preserves translational equivariance. Extensive simulation and real-robot experiments across two manipulation domains demonstrate the effectiveness of our approach.
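To make the contrast concrete, below is a minimal, hypothetical sketch of the difference between isometric augmentation (one shared group transformation for everything) and multi-group augmentation (independent transformations per object). It assumes planar SO(2) rotations acting on 2-D object positions and a 2-D action; the data layout and the names `random_so2`, `isometric_augment`, `multi_group_augment`, and `actor_idx` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def random_so2():
    """Sample a random planar rotation matrix (an SO(2) group element)."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def isometric_augment(obj_positions, action):
    """Baseline: one shared rotation applied to every object and to the action."""
    g = random_so2()
    return obj_positions @ g.T, g @ action

def multi_group_augment(obj_positions, action, actor_idx=0):
    """Non-isometric variant: each object gets its own independent rotation;
    the action is transformed with the group element of the object it acts on
    (indexed here by the hypothetical `actor_idx`)."""
    transforms = [random_so2() for _ in obj_positions]
    aug_positions = np.stack([g @ p for g, p in zip(transforms, obj_positions)])
    aug_action = transforms[actor_idx] @ action
    return aug_positions, aug_action

# Example: two tabletop objects and a 2-D end-effector displacement.
positions = np.array([[0.3, 0.1], [-0.2, 0.4]])
action = np.array([0.05, 0.00])
print(multi_group_augment(positions, action))
```

In the paper's setting the sampled transformations would have to act consistently on observations, actions, and rewards so that augmented transitions remain valid under the symmetric POMDP formulation; the sketch only illustrates the idea of sampling the group elements independently rather than sharing one across the whole scene.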
Similar Papers
Partially Equivariant Reinforcement Learning in Symmetry-Breaking Environments
Machine Learning (CS)
Teaches robots to learn faster, even with imperfect symmetry.
Eq.Bot: Enhance Robotic Manipulation Learning via Group Equivariant Canonicalization
Robotics
Robots learn to move objects more accurately.
SE(3)-Equivariant Robot Learning and Control: A Tutorial Survey
Robotics
Robots learn faster by understanding shapes.