Convergence and Sample Complexity of First-Order Methods for Agnostic Reinforcement Learning
By: Uri Sherman, Tomer Koren, Yishay Mansour
Potential Business Impact:
Helps systems learn decision-making strategies that come close to the best option in a restricted menu of strategies, with guarantees on how much data this requires.
We study reinforcement learning (RL) in the agnostic policy learning setting, where the goal is to find a policy whose performance is competitive with the best policy in a given class of interest $\Pi$ -- crucially, without assuming that $\Pi$ contains the optimal policy. We propose a general policy learning framework that reduces this problem to first-order optimization in a non-Euclidean space, leading to new algorithms as well as shedding light on the convergence properties of existing ones. Specifically, under the assumption that $\Pi$ is convex and satisfies a variational gradient dominance (VGD) condition -- an assumption known to be strictly weaker than more standard completeness and coverability conditions -- we obtain sample complexity upper bounds for three policy learning algorithms: \emph{(i)} Steepest Descent Policy Optimization, derived from a constrained steepest descent method for non-convex optimization; \emph{(ii)} the classical Conservative Policy Iteration algorithm \citep{kakade2002approximately} reinterpreted through the lens of the Frank-Wolfe method, which leads to improved convergence results; and \emph{(iii)} an on-policy instantiation of the well-studied Policy Mirror Descent algorithm. Finally, we empirically evaluate the VGD condition across several standard environments, demonstrating the practical relevance of our key assumption.
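To make the Frank-Wolfe reading of Conservative Policy Iteration concrete, here is a minimal illustrative sketch, not the paper's algorithm: it uses exact policy evaluation on a small tabular MDP, takes the policy class $\Pi$ to be the full simplex of stochastic tabular policies (so the linear optimization oracle reduces to the greedy policy), and uses the classical $2/(t+2)$ Frank-Wolfe step size. The function names, the choice of $\Pi$, the step-size schedule, and the use of exact evaluation in place of sample-based advantage estimates are all assumptions made for illustration.

```python
import numpy as np

def policy_eval(P, R, pi, gamma=0.9):
    """Exact policy evaluation for a tabular MDP.
    P: (S, A, S) transition tensor, R: (S, A) rewards, pi: (S, A) stochastic policy."""
    S, A, _ = P.shape
    P_pi = np.einsum("sa,sat->st", pi, P)          # state-to-state transitions under pi
    r_pi = np.einsum("sa,sa->s", pi, R)            # expected per-state reward under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = R + gamma * np.einsum("sat,t->sa", P, V)   # action-value function
    return V, Q

def cpi_frank_wolfe(P, R, n_iters=50, gamma=0.9):
    """Conservative Policy Iteration viewed as Frank-Wolfe over the simplex of
    tabular stochastic policies (an illustrative stand-in for the class Pi)."""
    S, A, _ = P.shape
    pi = np.full((S, A), 1.0 / A)                  # start from the uniform policy
    for t in range(n_iters):
        _, Q = policy_eval(P, R, pi, gamma)
        # Linear optimization oracle over Pi: for the full simplex this is just
        # the policy that is greedy with respect to the current Q-values.
        greedy = np.zeros_like(pi)
        greedy[np.arange(S), Q.argmax(axis=1)] = 1.0
        alpha = 2.0 / (t + 2)                      # assumed Frank-Wolfe step size
        pi = (1 - alpha) * pi + alpha * greedy     # conservative mixture update
    return pi
```

In the setting studied by the paper the oracle step would instead search over a restricted class $\Pi$ using estimated advantages, and the mixing coefficient is what makes the update "conservative" in the sense of Kakade and Langford (2002); the sketch above only illustrates the Frank-Wolfe structure of the update.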
Similar Papers
Agnostic Reinforcement Learning: Foundations and Algorithms
Machine Learning (CS)
Teaches computers to learn from mistakes better.
Convergence Guarantees of Model-free Policy Gradient Methods for LQR with Stochastic Data
Systems and Control
Makes smart robots learn better with messy data.
The Role of Environment Access in Agnostic Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn from mistakes better.