
Beyond expected value: geometric mean optimization for long-term policy performance in reinforcement learning

Published: August 29, 2025 | arXiv ID: 2508.21443v1

By: Xinyi Sheng, Dominik Baumann

Potential Business Impact:

Helps robots and other learning agents perform well on each individual run over the long term, not just on average across many runs.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement learning (RL) algorithms typically optimize the expected cumulative reward, i.e., the expected value of the sum of scalar rewards an agent receives over the course of a trajectory. The expected value averages performance over an infinite number of trajectories. However, when deploying the agent in the real world, this ensemble average may be uninformative about the performance of individual trajectories. Thus, in many applications, optimizing the long-term performance of individual trajectories may be more desirable. In this work, we propose a novel RL algorithm that combines the standard ensemble average with the time-average growth rate, a measure of the long-term performance of individual trajectories. We first define the Bellman operator for the time-average growth rate. We then show that, under multiplicative reward dynamics, the geometric mean aligns with the time-average growth rate. To address more general and unknown reward dynamics, we propose a modified geometric mean with an $N$-step sliding window that captures path dependency, serving as an estimator for the time-average growth rate. This estimator is embedded as a regularizer into the objective, yielding a practical algorithm that enables the policy to benefit from the ensemble average and the time average simultaneously. We evaluate our algorithm in challenging simulations, where it outperforms conventional RL methods.
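For intuition, here is a minimal Python sketch of the idea described in the abstract: a geometric mean over an $N$-step sliding window of rewards added as a regularizer to the usual discounted return. The function names (`sliding_geometric_mean`, `regularized_return`), the positivity clipping, and the trade-off weight `beta` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sliding_geometric_mean(rewards, window=10, eps=1e-8):
    """Geometric mean of the last `window` rewards.

    Rewards are clipped to a small positive epsilon so the geometric mean
    is well defined; this is an illustrative choice, not necessarily the
    paper's exact construction.
    """
    recent = np.asarray(rewards[-window:], dtype=float)
    recent = np.clip(recent, eps, None)             # keep values positive
    return float(np.exp(np.mean(np.log(recent))))   # exp(mean(log r)) = geometric mean

def regularized_return(rewards, gamma=0.99, beta=0.1, window=10):
    """Standard discounted return plus a geometric-mean regularizer.

    `beta` trades off the ensemble-average objective (discounted return)
    against the time-average surrogate (sliding-window geometric mean).
    """
    discounts = gamma ** np.arange(len(rewards))
    discounted = float(np.dot(discounts, rewards))
    return discounted + beta * sliding_geometric_mean(rewards, window)

# Example: per-step rewards from a single trajectory
trajectory_rewards = [1.0, 0.5, 2.0, 1.5, 0.8, 1.2]
print(regularized_return(trajectory_rewards, gamma=0.99, beta=0.1, window=4))
```

In an actual policy-gradient setting, this regularized quantity would replace the plain return when forming the learning signal, so the policy is pushed toward trajectories whose rewards grow steadily over time rather than ones that merely look good on average.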

Country of Origin
🇫🇮 Finland

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)