Score: 2

Partially Equivariant Reinforcement Learning in Symmetry-Breaking Environments

Published: November 30, 2025 | arXiv ID: 2512.00915v1

By: Junwoo Chang, Minwoo Park, Joohwan Seo, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Helps robots learn faster, even when their environment's symmetry is imperfect.

Business Areas:
A/B Testing Data and Analytics

Group symmetries provide a powerful inductive bias for reinforcement learning (RL), enabling efficient generalization across symmetric states and actions via group-invariant Markov Decision Processes (MDPs). However, real-world environments almost never realize fully group-invariant MDPs; dynamics, actuation limits, and reward design usually break symmetries, often only locally. When group-invariant Bellman backups are applied in such cases, locally broken symmetries introduce errors that propagate across the entire state-action space, resulting in global value estimation errors. To address this, we introduce the Partially group-Invariant MDP (PI-MDP), which selectively applies group-invariant or standard Bellman backups depending on where symmetry holds. This framework mitigates error propagation from locally broken symmetries while retaining the benefits of equivariance, thereby improving sample efficiency and generalization. Building on this framework, we present practical RL algorithms -- Partially Equivariant (PE)-DQN for discrete control and PE-SAC for continuous control -- that combine the benefits of equivariance with robustness to symmetry breaking. Experiments across Grid-World, locomotion, and manipulation benchmarks demonstrate that PE-DQN and PE-SAC significantly outperform baselines, highlighting the importance of selective symmetry exploitation for robust and sample-efficient RL.
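
The mechanism described in the abstract, switching between a group-invariant and a standard Bellman backup depending on whether symmetry holds locally, can be illustrated with a small tabular sketch. The code below is a hypothetical illustration, not the paper's PE-DQN/PE-SAC implementation: the `symmetry_holds` indicator, the uniform average over the group orbit, and the toy reflection group are all assumptions made for the example.

```python
# Minimal sketch of a selective ("partially equivariant") Bellman backup,
# assuming a tabular Q-function, a finite symmetry group acting on states,
# and a user-supplied indicator of where symmetry holds. Illustrative only.
import numpy as np

GAMMA = 0.99

def standard_target(Q, r, s_next):
    """Ordinary Bellman target: r + gamma * max_a' Q(s', a')."""
    return r + GAMMA * np.max(Q[s_next])

def invariant_target(Q, r, s_next, group):
    """Group-invariant target: average the bootstrap value over the orbit of
    the next state (a simple symmetrization; the paper's construction may differ)."""
    values = [np.max(Q[g(s_next)]) for g in group]
    return r + GAMMA * float(np.mean(values))

def partially_equivariant_backup(Q, s, a, r, s_next, group, symmetry_holds, lr=0.1):
    """Use the group-invariant backup only where symmetry locally holds;
    fall back to the standard backup elsewhere, so a locally broken symmetry
    does not corrupt value estimates across the whole orbit."""
    if symmetry_holds(s, a):
        target = invariant_target(Q, r, s_next, group)
    else:
        target = standard_target(Q, r, s_next)
    Q[s, a] += lr * (target - Q[s, a])
    return Q

if __name__ == "__main__":
    # Toy example: 5 states on a line with reflection symmetry s -> 4 - s,
    # hypothetically broken at state 4 (e.g., by an actuation limit).
    Q = np.zeros((5, 2))
    group = [lambda s: s, lambda s: 4 - s]          # identity and reflection
    symmetry_holds = lambda s, a: s != 4            # symmetry broken at state 4
    Q = partially_equivariant_backup(Q, s=1, a=0, r=1.0, s_next=2,
                                     group=group, symmetry_holds=symmetry_holds)
    print(Q)
```

Under these assumptions, the invariant backup shares value information across symmetric states only where that sharing is valid, which is the intuition behind the sample-efficiency and robustness claims above.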

Country of Origin
🇰🇷 🇺🇸 Korea, Republic of; United States

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)