Reinforcement Learning with Multi-Step Lookahead Information Via Adaptive Batching
By: Nadav Merlis
We study tabular reinforcement learning problems with multiple steps of lookahead information. Before acting, the learner observes $\ell$ steps of future transition and reward realizations: the exact state the agent would reach and the rewards it would collect under any possible course of action. While such information has been shown to drastically boost the achievable value, finding the optimal policy is NP-hard, and it is common to apply one of two tractable heuristics: processing the lookahead in chunks of predefined sizes ('fixed batching policies') or applying model predictive control. We first illustrate the problems with these two approaches and propose utilizing the lookahead in adaptive (state-dependent) batches; we refer to such policies as adaptive batching policies (ABPs). We derive the optimal Bellman equations for these strategies and design an optimistic regret-minimizing algorithm that learns the optimal ABP when interacting with unknown environments. Our regret bounds are order-optimal up to a potential factor of the lookahead horizon $\ell$, which can usually be regarded as a small constant.
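The abstract does not state the Bellman equations it derives, but the following is a minimal illustrative sketch of what an adaptive-batching Bellman equation could look like in a discounted tabular setting; the notation ($\omega$ for the revealed $\ell$-step realization, $b$ for the batch size, $a_{1:b}$ for a committed action sequence, $\gamma$ for a discount factor) is assumed here for illustration and is not taken from the paper:
$$ V(s) \;=\; \max_{b \in \{1,\dots,\ell\}} \; \mathbb{E}_{\omega}\!\left[\, \max_{a_{1:b}} \left( \sum_{t=1}^{b} r_t(\omega, a_{1:t}) \;+\; \gamma^{b}\, V\big(s_b(\omega, a_{1:b})\big) \right) \right], $$
where $r_t(\omega, a_{1:t})$ and $s_b(\omega, a_{1:b})$ denote the reward and reached state revealed by the lookahead under the committed actions. In this sketch the batch size is chosen per state (hence "adaptive"), the best $b$-step action sequence is selected with full knowledge of the revealed realizations, and the process then restarts from the reached state with fresh lookahead.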
Similar Papers
On the hardness of RL with Lookahead
Machine Learning (Stat)
Lets computers plan ahead to make better choices.
Look Before Leap: Look-Ahead Planning with Uncertainty in Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn faster and better.
Beyond Single-Step Updates: Reinforcement Learning of Heuristics with Limited-Horizon Search
Artificial Intelligence
Finds shortest paths faster using smarter guessing.