A Primer on SO(3) Action Representations in Deep Reinforcement Learning
By: Martin Schuck, Sherif Samy, Angela P. Schoellig
Potential Business Impact:
Helps robots move and turn more smoothly.
Many robotic control tasks require policies to act on orientations, yet the geometry of SO(3) makes this nontrivial. Because SO(3) admits no global, smooth, minimal parameterization, common representations such as Euler angles, quaternions, rotation matrices, and Lie algebra coordinates each introduce distinct constraints and failure modes. While these trade-offs are well studied for supervised learning, their implications for actions in reinforcement learning remain unclear. We systematically evaluate SO(3) action representations across three standard continuous control algorithms (PPO, SAC, and TD3) under dense and sparse rewards. Through empirical studies, we compare how representations shape exploration, interact with entropy regularization, and affect training stability, and we analyze the implications of different projections for obtaining valid rotations from Euclidean network outputs. Across a suite of robotics benchmarks, we quantify the practical impact of these choices and distill simple, implementation-ready guidelines for selecting and using rotation actions. Our results highlight that representation-induced geometry strongly influences exploration and optimization, and they show that representing actions as tangent vectors in the local frame yields the most reliable results across algorithms.
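To make the abstract's two key mechanics concrete, the sketch below illustrates (a) two common projections from raw Euclidean network outputs to valid rotations (quaternion normalization, and the SVD-based special orthogonal Procrustes projection for 3x3 outputs) and (b) applying an action as a tangent vector (rotation vector) in the local frame via the exponential map. This is a minimal NumPy/SciPy illustration of standard techniques, not code from the paper; the helper names are hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def quat_projection(raw4: np.ndarray) -> Rotation:
    """Project a raw 4-D network output onto SO(3) by normalizing it
    onto the unit sphere S^3 (SciPy expects (x, y, z, w) ordering)."""
    return Rotation.from_quat(raw4 / np.linalg.norm(raw4))

def svd_projection(raw9: np.ndarray) -> Rotation:
    """Project a raw 3x3 network output onto SO(3) via the special
    orthogonal Procrustes solution: SVD plus a determinant correction
    so the result is a proper rotation (det = +1)."""
    u, _, vt = np.linalg.svd(raw9.reshape(3, 3))
    d = np.sign(np.linalg.det(u @ vt))
    return Rotation.from_matrix(u @ np.diag([1.0, 1.0, d]) @ vt)

def apply_local_tangent_action(current: Rotation, action3: np.ndarray) -> Rotation:
    """Interpret a raw 3-D action as a tangent vector and apply it in the
    *local* (body) frame: exp-map the rotation vector, then right-multiply."""
    return current * Rotation.from_rotvec(action3)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pose = quat_projection(rng.normal(size=4))          # valid start orientation
    pose = apply_local_tangent_action(pose, 0.1 * rng.normal(size=3))
    print(pose.as_matrix())                             # still a valid rotation
```

Right-multiplying by the exp-mapped action is what makes the update local: the same action vector produces the same relative rotation regardless of the current orientation, which matches the abstract's finding that local-frame tangent-vector actions are the most reliable across algorithms.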
Similar Papers
Learning and Optimization with 3D Orientations
Robotics
Helps robots understand and move in 3D space.
Decentralized Swarm Control via SO(3) Embeddings for 3D Trajectories
Robotics
Robots move together without bumping into each other.
Orientation Learning and Adaptation towards Simultaneous Incorporation of Multiple Local Constraints
Robotics
Helps robots learn smooth, fast movements.