Generalization in Multitask Fitted Q-Iteration and Offline Q-Learning
By: Kausthubh Manda, Raghuram Bharadwaj Diddigi
We study offline multitask reinforcement learning in settings where multiple tasks share a low-rank representation of their action-value functions. In this regime, a learner is given fixed datasets collected from several related tasks, without further online interaction, and seeks to exploit the shared structure to improve statistical efficiency and generalization. We analyze a multitask variant of fitted Q-iteration that jointly learns a shared representation and task-specific value functions via Bellman error minimization on offline data. Under realizability and coverage assumptions standard in offline reinforcement learning, we establish finite-sample generalization guarantees for the learned value functions. Our analysis explicitly characterizes how pooling data across tasks improves estimation accuracy, yielding a $1/\sqrt{nT}$ rate, where $n$ is the number of samples per task and $T$ the number of tasks, while retaining the usual dependence on the horizon and on the concentrability coefficients arising from distribution shift. In addition, we consider a downstream offline setting in which a new task shares the same underlying representation as the upstream tasks. We study how reusing the representation learned during the multitask phase affects value estimation for the new task, and show that it can reduce the effective complexity of downstream learning relative to learning from scratch. Together, our results clarify the role of shared representations in multitask offline Q-learning and provide theoretical insight into when and how multitask structure can improve generalization in model-free, value-based reinforcement learning.
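To make the setup concrete, the sketch below illustrates the kind of procedure the abstract describes: multitask fitted Q-iteration where each task's value function factors as $Q_t(s,a) \approx \phi(s,a)^\top B\, w_t$ with a shared matrix $B$ and task-specific heads $w_t$, followed by a downstream phase that freezes the learned representation and fits only a new head. Everything here is an illustrative assumption, not the paper's algorithm or experiments: the toy tabular MDPs, the random feature map `Phi`, the alternating ridge-regression solver, and all sample sizes and hyperparameters are invented for the example.

```python
# Minimal sketch (not the authors' code) of multitask FQI with a shared
# low-rank representation Q_t(s, a) = phi(s, a)^T B w_t, fit by alternating
# ridge regression on offline data, plus downstream reuse of B.
import numpy as np

rng = np.random.default_rng(0)
S, A, d, k, T = 20, 4, 12, 3, 5          # states, actions, feature dim, rank, tasks
n, gamma, iters, ridge = 500, 0.9, 25, 1e-3

# Shared random feature map phi(s, a) in R^d (illustrative assumption).
Phi = rng.normal(size=(S, A, d)) / np.sqrt(d)

# Ground-truth low-rank structure: rewards r_t(s, a) = phi(s, a)^T B* w*_t.
B_star = rng.normal(size=(d, k))
W_star = rng.normal(size=(k, T + 1))      # last column reserved for the downstream task

def make_task(t):
    """Toy MDP for task t: random transitions, shared-low-rank rewards."""
    P = rng.dirichlet(np.ones(S), size=(S, A))              # P[s, a] is a dist over s'
    R = (Phi.reshape(S * A, d) @ B_star @ W_star[:, t]).reshape(S, A)
    return P, R

def collect(P, R, m):
    """Offline dataset of m transitions under a uniform behavior policy."""
    s = rng.integers(S, size=m)
    a = rng.integers(A, size=m)
    s2 = np.array([rng.choice(S, p=P[si, ai]) for si, ai in zip(s, a)])
    return s, a, R[s, a], s2

data = [collect(*make_task(t), n) for t in range(T)]

# --- Upstream: multitask FQI, alternating ridge regression over (B, {w_t}). ---
B = rng.normal(size=(d, k))
W = np.zeros((k, T))
for _ in range(iters):
    for t, (s, a, r, s2) in enumerate(data):
        # FQI regression targets from the current Q estimate for task t.
        y = r + gamma * (Phi[s2] @ (B @ W[:, t])).max(axis=1)
        # Step 1: fix B, ridge-solve the task-specific head w_t.
        Z = Phi[s, a] @ B                                   # (n, k)
        W[:, t] = np.linalg.solve(Z.T @ Z + ridge * np.eye(k), Z.T @ y)
    # Step 2: fix {w_t}, solve for the shared B (the loss is linear in vec(B)).
    G, b = np.zeros((d * k, d * k)), np.zeros(d * k)
    for t, (s, a, r, s2) in enumerate(data):
        y = r + gamma * (Phi[s2] @ (B @ W[:, t])).max(axis=1)
        M = np.kron(Phi[s, a], W[:, t][None, :])            # rows are phi(s,a) (x) w_t
        G += M.T @ M
        b += M.T @ y
    B = np.linalg.solve(G + ridge * np.eye(d * k), b).reshape(d, k)

# --- Downstream: a new task with the same representation; freeze B, fit only w. ---
s, a, r, s2 = collect(*make_task(T), n // 5)                # far less downstream data
w_new = np.zeros(k)
for _ in range(iters):
    y = r + gamma * (Phi[s2] @ (B @ w_new)).max(axis=1)
    Z = Phi[s, a] @ B
    w_new = np.linalg.solve(Z.T @ Z + ridge * np.eye(k), Z.T @ y)

q = Phi[s, a] @ B @ w_new
y = r + gamma * (Phi[s2] @ (B @ w_new)).max(axis=1)
print(f"downstream mean squared Bellman residual: {np.mean((q - y) ** 2):.4f}")
```

Solving for the shared $B$ jointly across all tasks is what pools the $nT$ samples in the abstract's rate; the downstream phase fits only the $k$-dimensional head $w_{\text{new}}$, which is the sense in which a reused representation can shrink the effective complexity of the new task relative to learning from scratch.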