Multi-Task Reinforcement Learning Enables Parameter Scaling

Published: March 7, 2025 | arXiv ID: 2503.05126v3

By: Reginald McLean, Evangelos Chatzaroulas, Jordan Terry, and more

Potential Business Impact:

Lets a single robot learn many jobs better simply by making its network bigger.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Multi-task reinforcement learning (MTRL) aims to endow a single agent with the ability to perform well on multiple tasks. Recent works have focused on developing novel, sophisticated architectures to improve performance, often resulting in larger models; it is unclear, however, whether the performance gains are a consequence of the architecture design itself or of the extra parameters. We argue that the gains are mostly due to scale, demonstrating that naively scaling up a simple MTRL baseline to match parameter counts outperforms the more sophisticated architectures, and that these gains benefit most from scaling the critic rather than the actor. Additionally, we explore the training stability advantages that come with task diversity, demonstrating that increasing the number of tasks can help mitigate plasticity loss. Our findings suggest that MTRL's simultaneous training across multiple tasks provides a natural framework for beneficial parameter scaling in reinforcement learning, challenging the need for complex architectural innovations.
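As a rough illustration of the paper's central claim, here is a minimal PyTorch sketch of scaling a plain actor-critic baseline by allocating most of the parameter budget to the critic. The widths, depths, and the one-hot task encoding are assumptions for illustration, not the authors' exact configuration:

import torch.nn as nn

def mlp(in_dim, hidden_dim, out_dim, depth):
    # Plain MLP with `depth` hidden layers of width `hidden_dim`.
    layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
    layers.append(nn.Linear(hidden_dim, out_dim))
    return nn.Sequential(*layers)

# Hypothetical Meta-World-like dimensions; task identity is a one-hot vector
# appended to the observation, so one shared network serves all tasks.
obs_dim, act_dim, num_tasks = 39, 4, 10

actor = mlp(obs_dim + num_tasks, 400, act_dim, depth=3)
# The critic (a Q-function over observation, task, and action) gets 4x the
# width, which is where the paper reports scaling helps most.
critic = mlp(obs_dim + num_tasks + act_dim, 1600, 1, depth=3)

print(sum(p.numel() for p in actor.parameters()))   # ~0.34M parameters
print(sum(p.numel() for p in critic.parameters()))  # ~5.2M parameters

Matching a sophisticated architecture's parameter count through naive width scaling like this is the comparison the paper runs against purpose-built MTRL designs.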

Country of Origin
🇨🇦 Canada

Page Count
17 pages

Category
Computer Science: Machine Learning (CS)