Is Exploration or Optimization the Problem for Deep Reinforcement Learning?

Published: August 2, 2025 | arXiv ID: 2508.01329v1

By: Glen Berseth

Potential Business Impact:

Estimates how much better deep reinforcement learning systems could perform if they fully exploited the experience they already collect, indicating the headroom available from improved training methods.

In the era of deep reinforcement learning, making progress is more complex because the collected experience must be compressed into a deep model for future exploitation and sampling. Many papers have shown that training a deep learning policy under a changing state and action distribution leads to sub-optimal performance, or even collapse. This naturally raises a concern: even if the community creates improved exploration algorithms or reward objectives, will those improvements fall on the "deaf ears" of optimization difficulties? This work proposes a new, practical sub-optimality estimator for determining the optimization limitations of deep reinforcement learning algorithms. Experiments across environments and RL algorithms show that the best experience generated is 2-3× better than the policy's learned performance. This large gap indicates that deep RL methods exploit only about half of the good experience they generate.
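
The idea behind such an estimator is simple in spirit: compare the return of the best trajectories the agent ever generated with the return its learned policy achieves at evaluation time. The sketch below illustrates that comparison in Python; the function name and the exact ratio form are illustrative assumptions, not the paper's precise estimator.

```python
import numpy as np

def suboptimality_estimate(collected_returns, policy_eval_returns):
    """Illustrative sketch (not the paper's exact estimator): compare the best
    episodic return found in the agent's own collected experience against the
    average return of the learned policy at evaluation time.

    collected_returns: episodic returns of experience generated during training
    policy_eval_returns: episodic returns of the learned policy at evaluation
    """
    best_generated = np.max(collected_returns)    # best behavior the agent ever produced
    learned = np.mean(policy_eval_returns)        # what the learned policy actually achieves
    # A ratio above 1 means the agent generated better behavior than it learned to reproduce.
    return best_generated / learned

# Example: the agent once achieved a return of 300 but its policy averages 120,
# a ~2.5x gap, in line with the 2-3x range reported in the abstract.
print(suboptimality_estimate(np.array([90.0, 180.0, 300.0]),
                             np.array([110.0, 130.0])))
```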

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)