Reinforcement Learning for Micro-Level Claims Reserving
By: Benjamin Avanzi, Ronald Richman, Bernard Wong, and more
Potential Business Impact:
Teaches computers to estimate claim costs better as claims develop.
Outstanding claim liabilities are revised repeatedly as claims develop, yet most modern reserving models are trained as one-shot predictors and typically learn only from settled claims. We formulate individual claims reserving as a claim-level Markov decision process in which an agent sequentially updates outstanding claim liability (OCL) estimates over development, using continuous actions and a reward design that balances accuracy with stable reserve revisions. A key advantage of this reinforcement learning (RL) approach is that it can learn from all observed claim trajectories, including claims that remain open at valuation, thereby avoiding the reduced sample size and selection effects inherent in supervised methods trained on ultimate outcomes only. We also introduce practical components needed for actuarial use -- initialisation of new claims, temporally consistent tuning via a rolling-settlement scheme, and an importance-weighting mechanism to mitigate portfolio-level underestimation driven by the rarity of large claims. On CAS and SPLICE synthetic general insurance datasets, the proposed Soft Actor-Critic implementation delivers competitive claim-level accuracy and strong aggregate OCL performance, particularly for the immature claim segments that drive most of the liability.
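The abstract describes three ingredients: a claim-level MDP whose state evolves over development periods, a continuous action that revises the OCL estimate, and a reward that trades off accuracy against the size of reserve revisions. The sketch below is a minimal illustration of that idea, not the paper's environment or reward: the class name, the state features (development period, cumulative paid, previous OCL estimate), the penalty weight `lam`, and the placeholder policy are all assumptions made for the example. In the paper's setting, a learned continuous-action policy (e.g. Soft Actor-Critic) would replace the placeholder decision rule.

```python
import numpy as np


class ClaimReservingEnv:
    """Toy claim-level MDP: at each development period the agent outputs a
    revised OCL estimate (continuous action); the reward penalises both the
    estimation error and the revision from the previous estimate.
    Illustrative sketch only, not the paper's implementation."""

    def __init__(self, paid_increments, lam=0.1):
        # paid_increments: incremental payments for one simulated claim,
        # one entry per development period (hypothetical input format).
        self.paid = np.asarray(paid_increments, dtype=float)
        self.lam = lam                       # assumed weight on the revision-stability penalty
        self.t = 0
        self.prev_ocl = self.paid.sum()      # naive initial OCL estimate for a new claim

    def state(self):
        # Hypothetical state: development period, cumulative paid, previous OCL estimate.
        return np.array([self.t, self.paid[: self.t].sum(), self.prev_ocl])

    def step(self, action_ocl):
        # True outstanding liability from period t onward (known here because the
        # trajectory is simulated; in practice it is only partially observed).
        true_ocl = self.paid[self.t :].sum()
        accuracy_penalty = (action_ocl - true_ocl) ** 2
        revision_penalty = (action_ocl - self.prev_ocl) ** 2
        reward = -(accuracy_penalty + self.lam * revision_penalty)
        self.prev_ocl = action_ocl
        self.t += 1
        done = self.t >= len(self.paid)
        return self.state(), reward, done


# Usage: roll one simulated claim through the environment with a crude fixed rule.
env = ClaimReservingEnv(paid_increments=[100.0, 60.0, 30.0, 10.0], lam=0.1)
state, done = env.state(), False
while not done:
    dev_period, cum_paid, prev_ocl = state
    action = 0.5 * prev_ocl  # placeholder policy; an RL agent would learn this mapping
    state, reward, done = env.step(action)
    print(f"period={int(dev_period)}  revised OCL={action:.1f}  reward={reward:.1f}")
```

Because the reward is observable at every development period, an agent trained this way can use trajectories from claims that are still open at the valuation date, which is the sample-size advantage the abstract highlights over supervised models fitted only to settled claims.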
Similar Papers
Adaptive Insurance Reserving with CVaR-Constrained Reinforcement Learning under Macroeconomic Regimes
Machine Learning (CS)
Helps insurance companies save money safely.
Continuous-Time Reinforcement Learning for Asset-Liability Management
Machine Learning (CS)
Learns best way to balance money over time.
Risk-sensitive Reinforcement Learning Based on Convex Scoring Functions
Mathematical Finance
Teaches computers to trade money safely and smartly.