PAC Learnability in the Presence of Performativity
By: Ivan Kirev, Lyuben Baltadzhiev, Nikola Konstantinov
Potential Business Impact:
Helps AI models stay accurate even when deploying them changes the data they see.
Following the widespread adoption of machine learning models in real-world applications, the phenomenon of performativity, i.e. model-dependent shifts in the test distribution, becomes increasingly prevalent. Unfortunately, since models are usually trained solely on samples from the original (unshifted) distribution, this performative shift may lead to decreased test-time performance. In this paper, we study the question of whether and when performative binary classification problems are learnable, through the lens of the classic PAC (Probably Approximately Correct) learning framework. We motivate several performative scenarios, accounting in particular for linear shifts in the label distribution, as well as for more general changes in both the labels and the features. We construct a performative empirical risk function, which depends only on data from the original distribution and on the type of performative effect, yet is an unbiased estimate of the true risk of a classifier on the shifted distribution. Minimizing this notion of performative risk allows us to show that any PAC-learnable hypothesis space in the standard binary classification setting remains PAC-learnable for the considered performative scenarios. We also conduct an extensive experimental evaluation of our performative risk minimization method and showcase benefits on synthetic and real data.
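To make the idea of a performative empirical risk concrete, here is a minimal sketch under an assumed toy shift model (an illustration only, not the paper's exact construction): deploying a classifier h causes each test label to flip toward h's prediction with a known probability eps. Under 0-1 loss, the risk on the shifted distribution is then (1 - eps) times the risk on the original distribution, so an unbiased estimate of the shifted risk can be computed from original-distribution samples alone. All names (`EPS`, `performative_empirical_risk`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy shift model: after deploying h, each label flips to h(x)
# with probability EPS. For 0-1 loss this gives
#   R_perf(h) = (1 - EPS) * R(h),
# so a "performative empirical risk" that is unbiased for the shifted
# risk, yet uses only original (unshifted) samples, is:
EPS = 0.3

def empirical_risk(h, X, y):
    # Ordinary empirical 0-1 risk on original-distribution samples.
    return np.mean(h(X) != y)

def performative_empirical_risk(h, X, y, eps=EPS):
    # Depends only on original data and the (known) type of shift.
    return (1 - eps) * empirical_risk(h, X, y)

# Monte Carlo check of unbiasedness: simulate the shifted distribution.
n = 200_000
X = rng.normal(size=n)
y = (X + rng.normal(scale=0.5, size=n) > 0).astype(int)  # noisy labels
h = lambda X: (X > 0.2).astype(int)                      # a fixed classifier

flip = rng.random(n) < EPS                 # which labels the deployment flips
y_shifted = np.where(flip, h(X), y)        # labels drawn from shifted dist.

risk_on_shifted = np.mean(h(X) != y_shifted)
estimate = performative_empirical_risk(h, X, y)
print(round(risk_on_shifted, 3), round(estimate, 3))  # the two should agree
```

Since the estimator here is a positive multiple of the standard empirical risk, minimizing it coincides with ordinary ERM for this particular shift; richer shift types (e.g. feature shifts) generally reweight the loss in less trivial ways.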
Similar Papers
Nonlinear Performative Prediction
Machine Learning (CS)
Makes smart systems learn without changing their own rules.
Optimal Regularization for Performative Learning
Machine Learning (CS)
Teaches computers to learn from changing information.
PAC Reasoning: Controlling the Performance Loss for Efficient Reasoning
Artificial Intelligence
Makes smart computers solve problems faster, with fewer mistakes.