Score: 1

Convergence and Sample Complexity of First-Order Methods for Agnostic Reinforcement Learning

Published: July 6, 2025 | arXiv ID: 2507.04406v1

By: Uri Sherman, Tomer Koren, Yishay Mansour

Potential Business Impact:

Teaches computers to find the best decision-making strategy available within a given set of options, even when no option in that set is perfect.

Business Areas:
A/B Testing, Data and Analytics

We study reinforcement learning (RL) in the agnostic policy learning setting, where the goal is to find a policy whose performance is competitive with the best policy in a given class of interest $\Pi$ -- crucially, without assuming that $\Pi$ contains the optimal policy. We propose a general policy learning framework that reduces this problem to first-order optimization in a non-Euclidean space, leading to new algorithms as well as shedding light on the convergence properties of existing ones. Specifically, under the assumption that $\Pi$ is convex and satisfies a variational gradient dominance (VGD) condition -- an assumption known to be strictly weaker than more standard completeness and coverability conditions -- we obtain sample complexity upper bounds for three policy learning algorithms: (i) Steepest Descent Policy Optimization, derived from a constrained steepest descent method for non-convex optimization; (ii) the classical Conservative Policy Iteration algorithm (Kakade & Langford, 2002), reinterpreted through the lens of the Frank-Wolfe method, which leads to improved convergence results; and (iii) an on-policy instantiation of the well-studied Policy Mirror Descent algorithm. Finally, we empirically evaluate the VGD condition across several standard environments, demonstrating the practical relevance of our key assumption.
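
As a rough illustration only (not the paper's exact statement), a variational gradient dominance condition over a policy class $\Pi$ is typically written along the following lines, where $V^{\pi}(\rho)$ denotes the value of policy $\pi$ under start distribution $\rho$, and $c$ and $\varepsilon_{\mathrm{vgd}}$ are illustrative constants:

\[
\max_{\pi^\star \in \Pi} V^{\pi^\star}(\rho) - V^{\pi}(\rho)
\;\le\;
c \cdot \max_{\bar\pi \in \Pi} \big\langle \nabla_{\pi} V^{\pi}(\rho),\, \bar\pi - \pi \big\rangle
+ \varepsilon_{\mathrm{vgd}}
\qquad \text{for all } \pi \in \Pi .
\]

The Frank-Wolfe reading of Conservative Policy Iteration (Kakade & Langford, 2002) then mixes the current policy with an (approximately) best linearized policy in the class, using a step size $\alpha_t \in [0,1]$:

\[
\pi_{t+1} = (1 - \alpha_t)\,\pi_t + \alpha_t\,\pi'_t,
\qquad
\pi'_t \approx \arg\max_{\bar\pi \in \Pi} \big\langle \nabla_{\pi} V^{\pi_t}(\rho),\, \bar\pi - \pi_t \big\rangle .
\]

The precise constants, error terms, and sample-based estimators used in the paper may differ from this sketch.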

Country of Origin
🇮🇱 Israel

Repos / Data Links

Page Count
43 pages

Category
Computer Science:
Machine Learning (CS)