Good-for-MDP State Reduction for Stochastic LTL Planning
By: Christoph Weinhuber, Giuseppe De Giacomo, Yong Li, and more
Potential Business Impact:
Lets robots and autonomous agents plan for complex temporal goals faster and at larger scale.
We study stochastic planning problems in Markov Decision Processes (MDPs) with goals specified in Linear Temporal Logic (LTL). The state-of-the-art approach transforms LTL formulas into good-for-MDP (GFM) automata, which feature a restricted form of nondeterminism. These automata are then composed with the MDP, allowing the agent to resolve the nondeterminism during policy synthesis. A major factor affecting the scalability of this approach is the size of the generated automata. In this paper, we propose a novel GFM state-space reduction technique that significantly reduces the number of automata states. Our method employs a sophisticated chain of transformations, leveraging recent advances in good-for-games minimisation developed for adversarial settings. In addition to our theoretical contributions, we present empirical results demonstrating the practical effectiveness of our state-reduction technique. Furthermore, we introduce a direct construction method for formulas of the form $\mathsf{G}\mathsf{F}\varphi$, where $\varphi$ is a co-safety formula. This construction is provably single-exponential in the worst case, in contrast to the general doubly-exponential complexity. Our experiments confirm the scalability advantages of this specialised construction.
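The core pipeline described in the abstract is: translate the LTL goal into a good-for-MDP automaton, then compose it with the MDP so that policy synthesis on the product resolves the automaton's nondeterminism. The following is a minimal sketch of that product construction for the goal GF a, whose automaton is deterministic (and hence trivially good-for-MDP). All names (`mdp_states`, `delta`, `build_product`) and the toy two-state MDP are illustrative assumptions, not the paper's construction.

```python
from itertools import product as cartesian

# Hypothetical toy MDP: each state is labelled with the atomic
# propositions that hold there.
mdp_states = {"s0": set(), "s1": {"a"}}
mdp_trans = {  # state -> action -> {successor: probability}
    "s0": {"go": {"s1": 0.9, "s0": 0.1}},
    "s1": {"go": {"s0": 1.0}},
}

# Deterministic Buchi automaton for GF a: move to the accepting
# state q1 exactly when 'a' holds in the label just read.
def delta(q, label):
    return "q1" if "a" in label else "q0"

accepting = {"q1"}

def build_product():
    """Pair each MDP state with an automaton state; the automaton
    component tracks progress toward the LTL goal."""
    trans = {}
    for (s, q) in cartesian(mdp_states, ["q0", "q1"]):
        for act, succ in mdp_trans[s].items():
            trans[((s, q), act)] = {
                (s2, delta(q, mdp_states[s2])): p for s2, p in succ.items()
            }
    return trans

product_mdp = build_product()
# A policy maximising the long-run frequency of accepting product
# states ((_, q) with q in accepting) maximises Pr(GF a).
```

In this deterministic case the product is a plain MDP; the good-for-MDP condition matters precisely when the automaton is nondeterministic, since it guarantees the agent can resolve that nondeterminism on the fly without losing optimal satisfaction probability. The number of automaton states multiplies the product size, which is why the paper's state reduction directly improves scalability.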
Similar Papers
Soft state reduction of fuzzy automata over residuated lattices
Formal Languages and Automata Theory
Makes fuzzy computer programs smaller and faster.
Automated Generation of MDPs Using Logic Programming and LLMs for Robotic Applications
Robotics
Builds robots that learn tasks from simple instructions.