Score: 2

ADARL: Adaptive Low-Rank Structures for Robust Policy Learning under Uncertainty

Published: October 13, 2025 | arXiv ID: 2510.11899v1

By: Chenliang Li, Junyu Leng, Jiaxiang Li, and more

Potential Business Impact:

Helps robots and other control systems learn policies that stay reliable when the model of the environment is imperfect, without the heavy computation or over-caution of worst-case adversarial training.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Robust reinforcement learning (Robust RL) seeks to handle epistemic uncertainty in environment dynamics, but existing approaches often rely on nested min-max optimization, which is computationally expensive and yields overly conservative policies. We propose Adaptive Rank Representation (AdaRL), a bi-level optimization framework that improves robustness by aligning policy complexity with the intrinsic dimension of the task. At the lower level, AdaRL performs policy optimization under fixed-rank constraints with dynamics sampled from a Wasserstein ball around a centroid model. At the upper level, it adaptively adjusts the rank to balance the bias-variance trade-off, projecting policy parameters onto a low-rank manifold. This design avoids solving adversarial worst-case dynamics while ensuring robustness without over-parameterization. Empirical results on MuJoCo continuous control benchmarks demonstrate that AdaRL not only consistently outperforms fixed-rank baselines (e.g., SAC) and state-of-the-art robust RL methods (e.g., RNAC, Parseval), but also converges toward the intrinsic rank of the underlying tasks. These results highlight that adaptive low-rank policy representations provide an efficient and principled alternative for robust RL under model uncertainty.
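
To make the two steps in the abstract concrete, here is a minimal, illustrative Python sketch (not the authors' implementation) of the core ingredients it describes: projecting a policy weight matrix onto a fixed-rank manifold via truncated SVD, and sampling a perturbed dynamics model from a ball around a centroid model. The function names, dimensions, learning rate, and the use of a simple Euclidean ball in place of the paper's Wasserstein ball are all assumptions made for illustration.

```python
import numpy as np

def project_to_rank(W: np.ndarray, rank: int) -> np.ndarray:
    """Project a policy weight matrix onto the manifold of rank-`rank`
    matrices via truncated SVD (Eckart-Young best low-rank approximation)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = min(rank, len(s))
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def sample_perturbed_dynamics(theta_centroid: np.ndarray,
                              radius: float,
                              rng: np.random.Generator) -> np.ndarray:
    """Toy stand-in for sampling dynamics parameters near a centroid model;
    the paper samples from a Wasserstein ball, here a Euclidean ball is used."""
    direction = rng.normal(size=theta_centroid.shape)
    direction /= np.linalg.norm(direction) + 1e-12
    return theta_centroid + radius * rng.uniform() * direction

# Lower-level style update under a fixed-rank constraint (hypothetical sizes).
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 17))       # policy layer: obs features x action dims
grad = rng.normal(size=W.shape)     # placeholder for a policy gradient estimate
W = project_to_rank(W - 1e-2 * grad, rank=4)
print(np.linalg.matrix_rank(W))     # -> 4
```

In this sketch the upper level of AdaRL would correspond to choosing the `rank` argument adaptively (e.g., increasing it when the constrained policy underfits and decreasing it when performance is unstable), while the lower level repeats the gradient-plus-projection update against dynamics drawn by `sample_perturbed_dynamics`.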

Country of Origin
🇨🇳 🇺🇸 United States, China

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)