Performative Risk Control: Calibrating Models for Reliable Deployment under Performativity

Published: May 30, 2025 | arXiv ID: 2505.24097v1

By: Victor Li, Baiting Chen, Yuzhen Mao, and more

Potential Business Impact:

Helps AI make safer choices when its predictions change outcomes.

Business Areas:
Risk Management, Professional Services

Calibrating black-box machine learning models to achieve risk control is crucial for reliable decision-making. A rich line of literature has studied how to calibrate a model so that its predictions satisfy explicit finite-sample statistical guarantees under a fixed, static, and unknown data-generating distribution. However, prediction-supported decisions may influence the very outcomes they aim to predict, a phenomenon known as performativity of predictions, which is common in social science and economics. In this paper, we introduce Performative Risk Control, a framework for calibrating models to achieve risk control under performativity with provable theoretical guarantees. Specifically, we provide an iteratively refined calibration process that ensures predictions are improved and risk-controlled throughout. We also study different types of risk measures and choices of tail bounds. Finally, we demonstrate the effectiveness of our framework through numerical experiments on predicting credit default risk. To the best of our knowledge, this is the first work to study statistically rigorous risk control under performativity, serving as an important safeguard against a wide range of strategic manipulation in decision-making processes.
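To make the iterative calibration idea concrete, below is a minimal Python sketch of one plausible instantiation, not the paper's exact procedure: a decision threshold is deployed, calibration data are collected under that deployment (so the data distribution depends on the threshold, mimicking performativity), and the threshold is recalibrated with an empirical-risk-plus-tail-bound criterion until it stabilizes. The simulated credit-default data model, the Hoeffding-style bound, and all names (`deploy_and_collect`, `calibrate`, `ALPHA`, `DELTA`) are illustrative assumptions.

```python
# Conceptual sketch: iterative risk-controlled calibration under performativity.
# Everything here (data model, bound, names) is an illustrative assumption,
# not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 0.1      # target risk level (e.g., default rate among approved applicants)
DELTA = 0.05     # allowed failure probability of the finite-sample guarantee
N_CAL = 5_000    # calibration sample size per deployment round


def deploy_and_collect(threshold: float):
    """Simulate data gathered under a deployed threshold.

    Performativity: applicants adapt, so both the score distribution and the
    default probabilities depend on the threshold currently in use.
    """
    scores = rng.beta(2, 2 + threshold, size=N_CAL)            # shifted scores
    p_default = np.clip(scores - 0.1 * threshold, 0.0, 1.0)    # shifted outcomes
    defaults = rng.binomial(1, p_default)
    return scores, defaults


def calibrate(scores, defaults, alpha, delta):
    """Return the most permissive threshold whose empirical risk plus a
    Hoeffding-style tail bound stays below alpha (loss assumed in [0, 1])."""
    bound = np.sqrt(np.log(1.0 / delta) / (2.0 * len(scores)))
    best = 0.0
    for lam in np.linspace(0.0, 1.0, 101):
        approved = scores <= lam                    # approve low-risk scores
        risk = defaults[approved].mean() if approved.any() else 0.0
        if risk + bound <= alpha:
            best = lam                              # keep the largest feasible threshold
    return best


# Iteratively refine the calibration: redeploy, recollect, recalibrate.
lam = 0.5
for round_ in range(10):
    scores, defaults = deploy_and_collect(lam)      # data under the current policy
    new_lam = calibrate(scores, defaults, ALPHA, DELTA)
    print(f"round {round_}: threshold {lam:.2f} -> {new_lam:.2f}")
    if abs(new_lam - lam) < 1e-3:                   # stop once calibration stabilizes
        break
    lam = new_lam
```

In this sketch, each round controls risk on data drawn under the previously deployed threshold, and the loop stops when the calibrated parameter no longer moves; the paper's framework supplies the conditions and tail bounds under which such a process carries provable guarantees.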

Page Count
28 pages

Category
Statistics: Machine Learning (Stat)