Convergence and stability of Q-learning in Hierarchical Reinforcement Learning
By: Massimiliano Manenti, Andrea Iannelli
Potential Business Impact:
Teaches computers to learn complex tasks faster.
Hierarchical Reinforcement Learning promises, among other benefits, to efficiently capture and exploit the temporal structure of a decision-making problem and to enhance continual learning, but its theoretical guarantees lag behind practice. In this paper, we propose a Feudal Q-learning scheme and investigate the conditions under which its coupled updates converge and are stable. Leveraging the theory of Stochastic Approximation and the ODE method, we present a theorem establishing the convergence and stability properties of Feudal Q-learning, providing a principled analysis tailored to Feudal RL. Moreover, we show that the updates converge to a point that can be interpreted as the equilibrium of a suitably defined game, opening the door to game-theoretic approaches to Hierarchical RL. Lastly, experiments based on the Feudal Q-learning algorithm support the outcomes anticipated by the theory.
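The abstract does not reproduce the update equations, so the sketch below is offered only to make the idea of "coupled updates" between a high-level manager and a low-level worker concrete. It is a generic two-level Feudal Q-learning loop, not the authors' scheme: the toy chain environment, the subgoal set, the intrinsic reward, and the step sizes ALPHA_M / ALPHA_W are illustrative assumptions.

```python
# Minimal sketch of a two-level (feudal) Q-learning loop on a toy chain.
# NOT the paper's scheme: environment, subgoals, intrinsic reward, and
# step sizes are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 8              # chain of states 0..7; extrinsic reward at state 7
SUBGOALS = [3, 7]         # manager picks a target state for the worker
ACTIONS = [-1, +1]        # worker steps left / right along the chain
GAMMA = 0.95
ALPHA_M, ALPHA_W = 0.1, 0.2   # (possibly different) step sizes per level
EPS = 0.1                 # epsilon-greedy exploration at both levels
HORIZON = 10              # max worker steps per committed subgoal

Q_manager = np.zeros((N_STATES, len(SUBGOALS)))               # Q(s, g)
Q_worker = np.zeros((N_STATES, len(SUBGOALS), len(ACTIONS)))  # Q(s, g, a)

def eps_greedy(q_row):
    """Epsilon-greedy choice over a 1-D array of action values."""
    if rng.random() < EPS:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

for episode in range(300):
    s, done = 0, False
    for _ in range(40):                        # cap macro-steps per episode
        g = eps_greedy(Q_manager[s])           # manager commits to a subgoal
        goal_state = SUBGOALS[g]
        s0, ext_return, discount = s, 0.0, 1.0
        for _ in range(HORIZON):               # worker acts under that subgoal
            a = eps_greedy(Q_worker[s, g])
            s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
            terminal = s_next == N_STATES - 1
            ext_r = 1.0 if terminal else 0.0
            int_r = 1.0 if s_next == goal_state else 0.0   # worker's intrinsic reward
            # low-level (worker) Q-learning update, conditioned on the subgoal
            boot_w = 0.0 if terminal else np.max(Q_worker[s_next, g])
            Q_worker[s, g, a] += ALPHA_W * (int_r + GAMMA * boot_w - Q_worker[s, g, a])
            ext_return += discount * ext_r
            discount *= GAMMA
            s, done = s_next, terminal
            if done or s == goal_state:
                break
        # high-level (manager) Q-learning update over the elapsed macro-step
        boot_m = 0.0 if done else np.max(Q_manager[s])
        Q_manager[s0, g] += ALPHA_M * (ext_return + discount * boot_m - Q_manager[s0, g])
        if done:
            break

print("learned manager Q-values:\n", Q_manager.round(2))
```

The point of the sketch is that the two value tables are updated by interacting stochastic recursions: the worker's targets depend on the manager's chosen subgoal, and the manager's targets depend on the return the worker actually achieves. Coupled recursions of this kind are the object that a Stochastic Approximation / ODE-style analysis, as described in the abstract, is meant to handle.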
Similar Papers
Towards Formalizing Reinforcement Learning Theory
Machine Learning (CS)
Proves learning programs can always improve with practice.
Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
Machine Learning (CS)
Makes AI learn better and faster from mistakes.