Bellman Calibration for V-Learning in Offline Reinforcement Learning
By: Lars van der Laan, Nathan Kallus
We introduce Iterated Bellman Calibration, a simple, model-agnostic, post-hoc procedure for calibrating off-policy value predictions in infinite-horizon Markov decision processes. Bellman calibration requires that states with similar predicted long-term returns exhibit one-step returns consistent with the Bellman equation under the target policy. We adapt classical histogram and isotonic calibration to the dynamic, counterfactual setting by repeatedly regressing fitted Bellman targets onto a model's predictions, using a doubly robust pseudo-outcome to handle off-policy data. This yields a one-dimensional fitted value iteration scheme that can be applied to any value estimator. Our analysis provides finite-sample guarantees for both calibration and prediction under weak assumptions and, critically, without requiring Bellman completeness or realizability.
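Because the procedure is a one-dimensional fitted value iteration layered on top of an existing predictor, it can be sketched in a few lines. The Python code below is an illustrative sketch under stated assumptions, not the authors' implementation: it assumes an offline dataset of transitions (s, a, r, s') with importance ratios rho = pi(a|s)/mu(a|s) already computed, and it substitutes a plain importance-weighted Bellman target for the paper's doubly robust pseudo-outcome. The names v_hat and iterated_bellman_calibration are hypothetical.

```python
# Minimal sketch of iterated Bellman calibration via isotonic regression.
# Assumptions: offline transitions, precomputed importance ratios rho, and a
# simplified importance-weighted target in place of the doubly robust
# pseudo-outcome described in the abstract.
import numpy as np
from sklearn.isotonic import IsotonicRegression


def iterated_bellman_calibration(v_hat, s, r, s_next, rho, gamma=0.99, n_iters=50):
    """Return a calibrated value function: state -> f(v_hat(state)).

    v_hat     : callable mapping an array of states to predicted long-term returns
    s, s_next : arrays of current and next states from the offline data
    r         : observed one-step rewards
    rho       : importance ratios pi(a|s) / mu(a|s) for the logged actions
    """
    z = np.asarray(v_hat(s), dtype=float)            # predictions at current states
    z_next = np.asarray(v_hat(s_next), dtype=float)  # predictions at next states
    r = np.asarray(r, dtype=float)
    rho = np.asarray(rho, dtype=float)

    # Start from the identity map: the calibrated value equals the raw prediction.
    f = IsotonicRegression(out_of_bounds="clip").fit(z, z)
    for _ in range(n_iters):
        # Importance-weighted Bellman target under the target policy
        # (a simplified, higher-variance surrogate for the DR pseudo-outcome).
        target = rho * (r + gamma * f.predict(z_next))
        # One-dimensional regression of Bellman targets onto the model's
        # predictions: the "fitted value iteration in one dimension" step.
        f = IsotonicRegression(out_of_bounds="clip").fit(z, target)

    return lambda states: f.predict(np.asarray(v_hat(states), dtype=float))
```

The isotonic regressor could be swapped for a binning (histogram) regressor to mirror the histogram-calibration variant mentioned in the abstract; in either case the regression is one-dimensional in the model's predicted value, which is what keeps the scheme model-agnostic and applicable to any value estimator.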
Similar Papers
Reinforcement Learning with Imperfect Transition Predictions: A Bellman-Jensen Approach
Machine Learning (CS)
Helps computers make better choices with future guesses.
First-order Sobolev Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn faster and more reliably.
Operator Models for Continuous-Time Offline Reinforcement Learning
Machine Learning (Stat)
Teaches computers to learn from past actions safely.