Value Iteration with Guessing for Markov Chains and Markov Decision Processes

Published: May 10, 2025 | arXiv ID: 2505.06769v1

By: Krishnendu Chatterjee, Mahdi JafariRaviz, Raimundo Saona, and more

Potential Business Impact:

Speeds up planning and verification for probabilistic systems by reducing the number of value-iteration updates required.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Two standard models for probabilistic systems are Markov chains (MCs) and Markov decision processes (MDPs). Classic objectives for such probabilistic models in control and planning are reachability and the stochastic shortest path. The widely studied algorithmic approach to these problems is the Value Iteration (VI) algorithm, which iteratively applies local updates called Bellman updates. Many practical approaches to VI exist in the literature, but they all require exponentially many Bellman updates for MCs in the worst case. A preprocessing step is an algorithm that is discrete, graph-theoretical, and requires linear space. An important open question is whether, after a polynomial-time preprocessing step, VI can be achieved with sub-exponentially many Bellman updates. In this work, we present a new approach for VI based on guessing values. Our theoretical contributions are twofold. First, for MCs, we present an almost-linear-time preprocessing algorithm after which, along with guessing values, VI requires only sub-exponentially many Bellman updates. Second, we present an improved analysis of the speed of convergence of VI for MDPs. Finally, we present a practical algorithm for MDPs based on our new approach. Experimental results show that our approach provides a considerable improvement over existing VI-based approaches on several benchmark examples from the literature.
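To make the abstract's terms concrete, here is a minimal sketch of plain Value Iteration for the reachability objective on a Markov chain. This is illustrative only, not the paper's guessing-based algorithm: each sweep applies one Bellman update per state, and the values converge toward the probability of reaching the target set. All names (`value_iteration`, the example chain `mc`) are hypothetical.

```python
def value_iteration(transitions, target, iters=1000):
    """Approximate reachability probabilities in a Markov chain.

    transitions: dict mapping state -> list of (successor, probability)
    target: set of goal states
    Returns a dict mapping state -> approximate P(reach target).
    """
    states = list(transitions)
    # Initialize: 1 on the target, 0 elsewhere.
    v = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        new_v = {}
        for s in states:
            if s in target:
                new_v[s] = 1.0  # target states are treated as absorbing
            else:
                # Bellman update: expected value over successor states
                new_v[s] = sum(p * v[t] for t, p in transitions[s])
        v = new_v
    return v

# Toy chain: from 'a', reach goal 'g' w.p. 0.5 or sink 'x' w.p. 0.5.
mc = {"a": [("g", 0.5), ("x", 0.5)],
      "g": [("g", 1.0)],
      "x": [("x", 1.0)]}
vals = value_iteration(mc, target={"g"})
```

The worst-case slowness discussed in the abstract arises because, on some chains, such sweeps shrink the error only by an exponentially small amount per update; the paper's preprocessing-plus-guessing approach is designed to avoid that blow-up.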

Country of Origin
🇦🇹 🇺🇸 Austria, United States

Page Count
48 pages

Category
Computer Science:
Artificial Intelligence