Beyond Single-Step Updates: Reinforcement Learning of Heuristics with Limited-Horizon Search
By: Gal Hadar, Forest Agostinelli, Shahaf S. Shperberg
Potential Business Impact:
Finds shortest paths faster using smarter guessing.
Many sequential decision-making problems can be formulated as shortest-path problems, where the objective is to reach a goal state from a given starting state. Heuristic search is a standard approach for solving such problems, relying on a heuristic function to estimate the cost to the goal from any given state. Recent approaches leverage reinforcement learning to learn heuristics by applying deep approximate value iteration. These methods typically rely on single-step Bellman updates, where the heuristic of a state is updated based on its best neighbor and the corresponding edge cost. This work proposes a generalized approach that enhances both state sampling and heuristic updates by performing limited-horizon searches and updating each state's heuristic based on the shortest path to the search frontier, incorporating both edge costs and the heuristic values of frontier states.
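The contrast between the two update rules can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes a successor function `succ(s)` returning `(state, edge_cost)` pairs and a heuristic table `h`; the function names, the toy graph, and the frontier definition (all states exactly `horizon` steps out) are illustrative assumptions.

```python
import heapq

def single_step_target(s, succ, h):
    # One-step Bellman backup: best neighbor's edge cost plus its heuristic.
    return min(c + h[t] for t, c in succ(s))

def horizon_target(s, succ, h, horizon):
    # Limited-horizon variant (sketch): run Dijkstra out to `horizon` steps
    # and take the cheapest (path cost to a frontier state + that state's
    # heuristic). A full implementation would also treat goal states and
    # dead ends reached before the horizon as frontier states.
    best = float("inf")
    pq = [(0.0, 0, s)]          # (path cost g, depth, state)
    seen = {}
    while pq:
        g, d, u = heapq.heappop(pq)
        if u in seen and seen[u] <= g:
            continue            # already reached u at least as cheaply
        seen[u] = g
        if d == horizon:        # u lies on the search frontier
            best = min(best, g + h[u])
            continue
        for v, c in succ(u):
            heapq.heappush(pq, (g + c, d + 1, v))
    return best

# Toy graph: an underestimating h('a') misleads the one-step update,
# while the two-step lookahead sees past it.
graph = {'s': [('a', 1), ('b', 4)], 'a': [('g', 5)], 'b': [('g', 1)], 'g': []}
h = {'s': 10.0, 'a': 1.0, 'b': 2.0, 'g': 0.0}
succ = lambda u: graph[u]

print(single_step_target('s', succ, h))      # shallow target
print(horizon_target('s', succ, h, 2))       # deeper, better-informed target
```

On this toy instance the one-step backup yields 2.0 (dragged down by the optimistic h('a')), whereas the horizon-2 update yields 5.0, the true cost of the best two-step path to the frontier, illustrating how deeper lookahead produces better-informed training targets.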
Similar Papers
Heuristics for Combinatorial Optimization via Value-based Reinforcement Learning: A Unified Framework and Analysis
Machine Learning (Stat)
Helps computers solve hard puzzles faster and better.
Bidirectional Bounded-Suboptimal Heuristic Search with Consistent Heuristics
Artificial Intelligence
Finds faster paths with a small error.
Theoretical Barriers in Bellman-Based Reinforcement Learning
Machine Learning (CS)
Makes AI learn better by not missing important clues.