Score: 1

Compute-Optimal Scaling for Value-Based Deep RL

Published: August 20, 2025 | arXiv ID: 2508.14881v2

By: Preston Fu, Oleh Rybkin, Zhiyuan Zhou, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Shows how to split a fixed training compute budget between model size and update frequency so reinforcement learning agents reach target performance with less compute and fewer samples.

Business Areas:
Big Data, Data and Analytics

As models grow larger and training them becomes expensive, it becomes increasingly important to scale training recipes not just to larger models and more data, but to do so in a compute-optimal manner that extracts maximal performance per unit of compute. While such scaling has been well studied for language modeling, reinforcement learning (RL) has received less attention in this regard. In this paper, we investigate compute scaling for online, value-based deep RL. These methods present two primary axes for compute allocation: model capacity and the update-to-data (UTD) ratio. Given a fixed compute budget, we ask: how should resources be partitioned across these axes to maximize sample efficiency? Our analysis reveals a nuanced interplay among model size, batch size, and UTD. In particular, we identify a phenomenon we call TD-overfitting: increasing the batch size quickly harms Q-function accuracy for small models, but this effect is absent in large models, enabling effective use of large batch sizes at scale. We provide a mental model for understanding this phenomenon and build guidelines for choosing batch size and UTD to optimize compute usage. Our findings provide a grounded starting point for compute-optimal scaling in deep RL, mirroring studies in supervised learning but adapted to TD learning.
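To make the compute-allocation question concrete, here is a minimal, hypothetical sketch of how one might enumerate (model size, batch size, UTD) configurations under a fixed per-environment-step FLOP budget. The cost model (FLOPs roughly proportional to parameters × batch size × UTD), the candidate values, and the function names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: partitioning a fixed compute budget across model size,
# batch size, and the update-to-data (UTD) ratio for value-based deep RL.
# All numbers and the cost model below are assumptions for illustration only.

from itertools import product

def flops_per_env_step(num_params: int, batch_size: int, utd: int) -> float:
    """Rough cost: ~6 FLOPs per parameter per example (forward + backward),
    repeated once for each gradient update taken per environment step."""
    return 6.0 * num_params * batch_size * utd

def feasible_configs(budget_per_step: float):
    """Enumerate (params, batch, UTD) settings that fit under the budget."""
    model_sizes = [1_000_000, 5_000_000, 20_000_000]  # Q-network parameter counts (assumed)
    batch_sizes = [64, 256, 1024]                     # minibatch sizes (assumed)
    utd_ratios = [1, 2, 8]                            # gradient updates per env step (assumed)
    for params, batch, utd in product(model_sizes, batch_sizes, utd_ratios):
        cost = flops_per_env_step(params, batch, utd)
        if cost <= budget_per_step:
            yield {"params": params, "batch": batch, "utd": utd, "flops": cost}

if __name__ == "__main__":
    # In practice one would train under each feasible config and compare sample
    # efficiency; the paper's TD-overfitting finding suggests large batches pay
    # off mainly at larger model sizes.
    for cfg in feasible_configs(budget_per_step=5e9):
        print(cfg)
```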

Country of Origin
🇺🇸 United States

Page Count
42 pages

Category
Computer Science:
Machine Learning (CS)