A Minimax-MDP Framework with Future-imposed Conditions for Learning-augmented Problems
By: Xin Chen, Yuze Chen, Yuan Zhou
Potential Business Impact:
Helps make better decisions as predictions improve over time.
We study a class of sequential decision-making problems with augmented predictions, potentially provided by a machine learning algorithm. In this setting, the decision-maker receives prediction intervals for unknown parameters that become progressively refined over time, and seeks decisions that are competitive with the hindsight optimal under all possible realizations of both parameters and predictions. We propose a minimax Markov Decision Process (minimax-MDP) framework, where the system state consists of an adversarially evolving environment state and an internal state controlled by the decision-maker. We introduce a set of future-imposed conditions that characterize the feasibility of minimax-MDPs and enable the design of efficient, often closed-form, robustly competitive policies. We illustrate the framework through three applications: multi-period inventory ordering with refining demand predictions, resource allocation with uncertain utility functions, and a multi-phase extension of the minimax-MDP applied to the inventory problem with time-varying ordering costs. Our results provide a tractable and versatile approach to robust online decision-making under predictive uncertainty.
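To make the setting concrete, here is a minimal, hypothetical sketch (not the authors' policy or notation) of the first application: multi-period inventory ordering where each period's demand is known only through a prediction interval, and the decision-maker orders against the worst-case realization inside that interval. The function names (robust_order, simulate), the midpoint-free refinement setup, and the conservative order-up-to rule are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch, assuming a simple interval-demand model: each period's
# demand is only known to lie in a prediction interval [lo, hi] at decision
# time, and the adversary may pick any demand inside it. All identifiers here
# are hypothetical.

def robust_order(inventory, interval):
    """Order up to the interval's upper endpoint (a conservative base-stock rule).

    Raising the inventory position to hi guarantees that every demand
    realization in [lo, hi] can be served without stocking out.
    """
    lo, hi = interval
    return max(0.0, hi - inventory)

def simulate(intervals, demands):
    """Run the ordering rule against one demand realization.

    intervals: per-period prediction intervals available at decision time.
    demands:   realized demands; each must lie in its period's interval.
    """
    inventory = 0.0
    total_ordered = 0.0
    for (lo, hi), d in zip(intervals, demands):
        assert lo <= d <= hi, "realization must respect the prediction interval"
        q = robust_order(inventory, (lo, hi))
        inventory += q
        total_ordered += q
        inventory -= d  # demand is served; leftover stock carries over
    return total_ordered, inventory

if __name__ == "__main__":
    # Intervals refine (shrink) as each period's decision approaches.
    intervals = [(4.0, 10.0), (5.0, 8.0), (6.0, 7.0)]
    worst_case = [hi for _, hi in intervals]  # adversary picks upper endpoints
    ordered, leftover = simulate(intervals, worst_case)
    print(f"total ordered: {ordered}, leftover stock: {leftover}")
```

Ordering up to the upper endpoint is feasible for every realization but can be far from the hindsight optimum; the paper's future-imposed conditions characterize when efficient, often closed-form, robustly competitive policies exist in this kind of setting.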
Similar Papers
Bayesian learning of the optimal action-value function in a Markov decision process
Machine Learning (Stat)
Helps robots learn best actions with less guessing.
Rollout-Based Approximate Dynamic Programming for MDPs with Information-Theoretic Constraints
Systems and Control
Helps computers make better choices with less data.
Deontically Constrained Policy Improvement in Reinforcement Learning Agents
Artificial Intelligence
Teaches robots to do good things, not bad ones.