Partially Equivariant Reinforcement Learning in Symmetry-Breaking Environments
By: Junwoo Chang, Minwoo Park, Joohwan Seo, and more
Potential Business Impact:
Teaches robots to learn faster, even with imperfect symmetry.
Group symmetries provide a powerful inductive bias for reinforcement learning (RL), enabling efficient generalization across symmetric states and actions via group-invariant Markov Decision Processes (MDPs). However, real-world environments almost never realize fully group-invariant MDPs; dynamics, actuation limits, and reward design usually break symmetries, often only locally. When group-invariant Bellman backups are applied in such settings, local symmetry breaking introduces errors that propagate across the entire state-action space, resulting in global value estimation errors. To address this, we introduce the Partially group-Invariant MDP (PI-MDP), which selectively applies group-invariant or standard Bellman backups depending on where symmetry holds. This framework mitigates error propagation from locally broken symmetries while maintaining the benefits of equivariance, thereby enhancing sample efficiency and generalizability. Building on this framework, we present practical RL algorithms -- Partially Equivariant (PE)-DQN for discrete control and PE-SAC for continuous control -- that combine the benefits of equivariance with robustness to symmetry breaking. Experiments across Grid-World, locomotion, and manipulation benchmarks demonstrate that PE-DQN and PE-SAC significantly outperform baselines, highlighting the importance of selective symmetry exploitation for robust and sample-efficient RL.
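To make the idea of a selective backup concrete, here is a minimal, hypothetical sketch in PyTorch of a DQN-style target that uses a group-averaged bootstrap only where symmetry is assumed to hold and the standard Bellman target elsewhere. The names `q_net`, `group_transforms`, and `symmetry_holds` are illustrative assumptions for this sketch, not the paper's actual API or algorithm.

```python
import torch

def partially_equivariant_target(q_net, reward, next_state, done, gamma,
                                 group_transforms, symmetry_holds):
    """Sketch of a selective (partially equivariant) Bellman target.

    Assumptions (not from the paper):
      - q_net(state) returns a [batch, num_actions] tensor of Q-values.
      - group_transforms is a list of callables mapping a state batch to its
        group-transformed copy.
      - symmetry_holds(state) returns True where the local symmetry is intact.
    """
    with torch.no_grad():
        if symmetry_holds(next_state):
            # Group-invariant backup: average the greedy bootstrap over all
            # group-transformed copies, exploiting the symmetry where it holds.
            q_per_transform = torch.stack([
                q_net(g(next_state)).max(dim=-1).values
                for g in group_transforms
            ])
            next_q = q_per_transform.mean(dim=0)
        else:
            # Standard backup: fall back to the plain Bellman target so that
            # locally broken symmetry does not leak errors into other states.
            next_q = q_net(next_state).max(dim=-1).values
        return reward + gamma * (1.0 - done) * next_q
```

The design choice this illustrates is the one stated in the abstract: the backup rule, not the network, switches between the group-invariant and standard forms, so symmetry-breaking regions are handled with ordinary bootstrapping while symmetric regions still benefit from equivariant generalization.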
Similar Papers
Multi-Group Equivariant Augmentation for Reinforcement Learning in Robot Manipulation
Robotics
Teaches robots to learn tasks faster.
Reinforcement Learning Using known Invariances
Machine Learning (CS)
Teaches robots to learn faster by using their shape.
Symmetries in PAC-Bayesian Learning
Machine Learning (CS)
Makes AI learn better from messy, shifted pictures.