Convergence and stability of Q-learning in Hierarchical Reinforcement Learning

Published: November 21, 2025 | arXiv ID: 2511.17351v1

By: Massimiliano Manenti, Andrea Iannelli

Potential Business Impact:

Provides theoretical guarantees that hierarchical learning methods reliably converge, helping computers learn complex, multi-step tasks faster and more dependably.

Business Areas:
Artificial Intelligence, Science and Engineering

Hierarchical Reinforcement Learning promises, among other benefits, to efficiently capture and utilize the temporal structure of a decision-making problem and to enhance continual learning capabilities, but theoretical guarantees lag behind practice. In this paper, we propose a Feudal Q-learning scheme and investigate under which conditions its coupled updates converge and are stable. By leveraging the theory of Stochastic Approximation and the ODE method, we present a theorem stating the convergence and stability properties of Feudal Q-learning. This provides a principled convergence and stability analysis tailored to Feudal RL. Moreover, we show that the updates converge to a point that can be interpreted as an equilibrium of a suitably defined game, opening the door to game-theoretic approaches to Hierarchical RL. Lastly, experiments based on the Feudal Q-learning algorithm support the outcomes anticipated by theory.
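To make the setting concrete, here is a minimal sketch of what coupled, Feudal-style Q-learning updates can look like in a tabular setting. It is an illustration only, not the authors' exact scheme from the paper: the environment interface, the worker's intrinsic reward, the subgoal space, and the step sizes `alpha_m` and `alpha_w` are all assumptions made for the example.

```python
# Sketch of coupled (Feudal-style) Q-learning: a high-level "manager" Q-table
# over (state, subgoal) and a low-level "worker" Q-table over
# (state, subgoal, action). Illustrative only; sizes, rewards, and step sizes
# are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_subgoals, n_actions = 25, 5, 4
Q_manager = np.zeros((n_states, n_subgoals))            # high-level values
Q_worker = np.zeros((n_states, n_subgoals, n_actions))  # low-level values

alpha_m, alpha_w = 0.05, 0.1   # (assumed) step sizes of the coupled updates
gamma = 0.95                   # discount factor
eps = 0.1                      # epsilon-greedy exploration


def eps_greedy(values):
    """Pick argmax with probability 1 - eps, otherwise a random index."""
    if rng.random() < eps:
        return int(rng.integers(len(values)))
    return int(np.argmax(values))


def worker_update(s, g, a, r_intrinsic, s_next):
    """One-step Q-learning update for the worker, conditioned on subgoal g."""
    td_target = r_intrinsic + gamma * Q_worker[s_next, g].max()
    Q_worker[s, g, a] += alpha_w * (td_target - Q_worker[s, g, a])


def manager_update(s, g, r_extrinsic_sum, s_next):
    """Q-learning update for the manager over the duration of one subgoal."""
    td_target = r_extrinsic_sum + gamma * Q_manager[s_next].max()
    Q_manager[s, g] += alpha_m * (td_target - Q_manager[s, g])
```

Choosing the manager's step size smaller than the worker's mirrors a common two-timescale stochastic-approximation setup; whether the paper uses exactly this choice is not stated in the abstract. Its theorem characterizes conditions under which such coupled updates converge and remain stable, with the limit point interpretable as an equilibrium of a suitably defined game.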

Country of Origin
🇩🇪 Germany

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)