Reliable Policy Iteration: Performance Robustness Across Architecture and Environment Perturbations
By: S. R. Eshwar, Aniruddha Mukherjee, Kintan Saha, et al.
In a recent work, we proposed Reliable Policy Iteration (RPI), which restores policy iteration's monotonicity-of-value-estimates property in the function approximation setting. Here, we assess the robustness of RPI's empirical performance on two classical control tasks -- CartPole and Inverted Pendulum -- under changes to neural network and environment parameters. Relative to DQN, Double DQN, DDPG, TD3, and PPO, RPI reaches near-optimal performance early in training and sustains that policy as training proceeds. Because deep RL methods are often hampered by sample inefficiency, training instability, and hyperparameter sensitivity, these results highlight RPI's promise as a more reliable alternative.
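To make the monotonicity property concrete, the following is a minimal, hypothetical sketch of a policy-iteration loop whose value estimates never decrease across iterations, illustrated on a small random finite MDP. It is not the authors' RPI algorithm; the elementwise lower-bound retention step is an illustrative assumption. In the exact tabular case standard policy iteration is already monotone; the point is that this is precisely the kind of guarantee that breaks under function approximation and that RPI is stated to restore.

```python
# Hypothetical sketch: policy iteration with a monotone value-estimate safeguard.
# Not the published RPI method; the np.maximum lower-bound step is assumed
# purely for illustration of monotone value estimates.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Random finite MDP: P[s, a, s'] transition probabilities, R[s, a] rewards.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def evaluate(policy):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi."""
    P_pi = P[np.arange(n_states), policy]           # (S, S)
    R_pi = R[np.arange(n_states), policy]           # (S,)
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

V = np.zeros(n_states)                              # running value estimate
policy = rng.integers(n_actions, size=n_states)     # arbitrary initial policy

for k in range(20):
    V_pi = evaluate(policy)
    # Monotonicity safeguard (assumed for illustration): never let the
    # estimate drop below the best value seen so far, so V_k is non-decreasing.
    V = np.maximum(V, V_pi)
    # Greedy policy improvement against the retained estimate.
    Q = R + gamma * P @ V                           # (S, A)
    policy = np.argmax(Q, axis=1)

print("final value estimate:", np.round(V, 3))
```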