Reliable Policy Iteration: Performance Robustness Across Architecture and Environment Perturbations

Published: December 12, 2025 | arXiv ID: 2512.12088v1

By: S. R. Eshwar, Aniruddha Mukherjee, Kintan Saha and more

In recent work, we proposed Reliable Policy Iteration (RPI), which restores policy iteration's monotonicity-of-value-estimates property in the function approximation setting. Here, we assess the robustness of RPI's empirical performance on two classical control tasks -- CartPole and Inverted Pendulum -- under perturbations to the neural network architecture and environment parameters. Relative to DQN, Double DQN, DDPG, TD3, and PPO, RPI reaches near-optimal performance early in training and sustains it as training proceeds. Because deep RL methods are often hampered by sample inefficiency, training instability, and hyperparameter sensitivity, these results highlight RPI's promise as a more reliable alternative.
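For context, the sketch below shows classical *tabular* policy iteration on a small random MDP, illustrating the monotonicity-of-value-estimates property that the abstract says RPI restores under function approximation: each greedy improvement step yields a value estimate at least as good as the previous one, componentwise. This is a hedged illustration of the classical property only, not the authors' RPI algorithm; the MDP, its sizes, and the discount factor are all hypothetical.

```python
# Classical tabular policy iteration on a hypothetical random MDP.
# Illustrates the monotonicity property (V_{pi_{k+1}} >= V_{pi_k});
# this is NOT the RPI algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9  # hypothetical MDP sizes / discount

# Random transition kernel P[s, a] -> distribution over next states,
# and random rewards R[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

def policy_evaluation(pi):
    """Solve V = R_pi + gamma * P_pi V exactly as a linear system."""
    P_pi = P[np.arange(n_states), pi]   # (n_states, n_states)
    R_pi = R[np.arange(n_states), pi]   # (n_states,)
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

pi = np.zeros(n_states, dtype=int)      # arbitrary initial policy
V_prev = policy_evaluation(pi)
for _ in range(50):
    # Greedy improvement: Q(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) V(s')
    Q = R + gamma * P @ V_prev
    pi_new = Q.argmax(axis=1)
    V = policy_evaluation(pi_new)
    # Monotonicity holds exactly in the tabular setting.
    assert np.all(V >= V_prev - 1e-10), "monotonicity violated"
    if np.array_equal(pi_new, pi):
        break                           # policy stable => optimal
    pi, V_prev = pi_new, V
print("converged value estimates:", np.round(V_prev, 3))
```

Under function approximation, the exact policy evaluation step above is replaced by a fitted approximation, which is where the monotonicity guarantee ordinarily breaks down and which RPI is designed to address.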

Category
Computer Science:
Artificial Intelligence