On a Reinforcement Learning Methodology for Epidemic Control, with application to COVID-19
By: Giacomo Iannucci, Petros Barmpounakis, Alexandros Beskos, and others
Potential Business Impact:
Helps leaders decide how to stop sickness faster.
This paper presents a real-time, data-driven decision-support framework for epidemic control. We combine a compartmental epidemic model with sequential Bayesian inference and reinforcement learning (RL) controllers that adaptively choose intervention levels to balance disease burden, such as intensive care unit (ICU) load, against socio-economic costs. We construct a context-specific cost function using empirical experiments and expert feedback. We study two RL policies: an ICU-threshold rule computed via Monte Carlo grid search, and a policy based on a posterior-averaged Q-learning agent. We validate the framework by fitting the epidemic model to publicly available ICU occupancy data from the COVID-19 pandemic in England and then generating counterfactual roll-out scenarios under each RL controller, which allows us to compare the RL policies to the historical government strategy. Over a 300-day period and for a range of cost parameters, both controllers substantially reduce ICU burden relative to the observed interventions, illustrating how Bayesian sequential learning combined with RL can support the design of epidemic control policies.
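To make the ICU-threshold controller concrete, the sketch below implements a toy version of the idea: a discrete-time SIR-style model where an intervention is switched on whenever simulated ICU occupancy crosses a threshold, and the threshold is chosen by Monte Carlo grid search over noisy roll-outs. All parameter values (transmission rates, ICU fraction, the economic weight) are illustrative assumptions, not the paper's fitted values, and the model is far simpler than the compartmental model the authors calibrate to English ICU data.

```python
import numpy as np

def rollout(threshold, days=300, seed=0):
    """One noisy roll-out of a toy SIR model with an ICU-threshold rule.

    Intervention (reduced transmission) is active whenever ICU occupancy
    exceeds `threshold`. Returns cumulative ICU-days and intervention-days.
    All parameters are illustrative, not fitted values from the paper.
    """
    rng = np.random.default_rng(seed)
    N = 1e6
    S, I, R = N - 100.0, 100.0, 0.0
    beta_open, beta_lock = 0.30, 0.12   # transmission without / with intervention
    gamma, icu_frac = 0.10, 0.02        # recovery rate, fraction of I in ICU
    icu_days = lock_days = 0.0
    for _ in range(days):
        icu = icu_frac * I
        locked = icu > threshold        # the ICU-threshold intervention rule
        beta = beta_lock if locked else beta_open
        # multiplicative noise makes each roll-out a distinct Monte Carlo sample
        new_inf = beta * S * I / N * rng.lognormal(0.0, 0.05)
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        icu_days += icu
        lock_days += float(locked)
    return icu_days, lock_days

def grid_search(thresholds, econ_weight=5.0, n_mc=20):
    """Pick the threshold minimizing mean(ICU-days + econ_weight * lockdown-days)
    over n_mc Monte Carlo roll-outs, mirroring the grid-search policy."""
    costs = []
    for t in thresholds:
        samples = [rollout(t, seed=s) for s in range(n_mc)]
        costs.append(np.mean([icu + econ_weight * lock for icu, lock in samples]))
    return thresholds[int(np.argmin(costs))], costs

best, costs = grid_search([200.0, 500.0, 1000.0, 2000.0])
```

The `econ_weight` term plays the role of the paper's context-specific cost function, trading ICU burden against socio-economic cost; in the full framework the roll-outs would be driven by posterior samples of the fitted epidemic model rather than ad hoc noise.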
Similar Papers
Optimization of Infectious Disease Intervention Measures Based on Reinforcement Learning -- Empirical analysis based on UK COVID-19 epidemic data
Machine Learning (CS)
Helps stop sickness spread and save money.
Learning Pareto-Optimal Pandemic Intervention Policies with MORL
Machine Learning (CS)
Finds best ways to stop sickness without hurting jobs.
Optimising pandemic response through vaccination strategies using neural networks
Applications
Helps stop sickness while saving money.