Optimal Regularization for Performative Learning
By: Edwige Cyffers, Alireza Mirrokni, Marco Mondelli
Potential Business Impact:
Teaches computers to keep learning well when the data changes in response to them.
In performative learning, the data distribution reacts to the deployed model (for example, because strategic users adapt their features to game it), which creates a more complex dynamic than in classical supervised learning. One should therefore not only optimize the model for the current data but also account for the fact that deploying the model may steer the distribution in a new direction, without knowing the exact nature of the potential shift. We explore how regularization can help cope with performative effects by studying its impact in high-dimensional ridge regression. We show that, while performative effects worsen the test risk in the population setting, they can be beneficial in the over-parameterized regime where the number of features exceeds the number of samples. We further show that the optimal regularization scales with the overall strength of the performative effect, making it possible to set the regularization in anticipation of this effect. We illustrate this finding through empirical evaluations of the optimal regularization parameter on both synthetic and real-world datasets.
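To make the role of regularization concrete, here is a minimal Python simulation sketch of the kind of setting described above. It is not the paper's exact setup: the performative effect is modeled, as an assumption for illustration, by a linear shift of the label-generating coefficients by -gamma * theta once a model theta is deployed, features are Gaussian, and d > n places the problem in the over-parameterized regime. All names and values (n, d, sigma, gamma, the lambda grid) are assumptions made only for this sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes: d > n puts us in the over-parameterized regime.
    n, d = 50, 200
    sigma = 0.5                                  # label noise level (assumed)
    beta_star = rng.normal(size=d) / np.sqrt(d)  # ground-truth coefficients

    def ridge(X, y, lam):
        # Closed-form ridge estimator: (X^T X + n*lam*I)^{-1} X^T y
        return np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)

    def performative_risk(theta, gamma, n_test=5000):
        # Assumed performative shift: after deploying theta, labels are drawn
        # from the shifted coefficients beta_star - gamma * theta.
        X = rng.normal(size=(n_test, d))
        y = X @ (beta_star - gamma * theta) + sigma * rng.normal(size=n_test)
        return np.mean((X @ theta - y) ** 2)

    def optimal_lambda(gamma, lambdas):
        # Train on pre-deployment data, then evaluate on the shifted distribution.
        X = rng.normal(size=(n, d))
        y = X @ beta_star + sigma * rng.normal(size=n)
        risks = [performative_risk(ridge(X, y, lam), gamma) for lam in lambdas]
        return lambdas[int(np.argmin(risks))]

    lambdas = np.logspace(-3, 2, 30)
    for gamma in [0.0, 0.5, 1.0, 2.0]:
        print(f"performative strength gamma={gamma:.1f} -> "
              f"risk-minimizing lambda ~ {optimal_lambda(gamma, lambdas):.3g}")

In this assumed toy shift model, a stronger performative effect (larger gamma) favors heavier shrinkage of theta, so the risk-minimizing lambda tends to grow with gamma, which mirrors the abstract's claim that the optimal regularization scales with the overall strength of the performative effect.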
Similar Papers
Nonlinear Performative Prediction
Machine Learning (CS)
Makes smart systems learn without changing their own rules.
PAC Learnability in the Presence of Performativity
Machine Learning (Stat)
Helps AI learn even when things change.
Beyond Real Data: Synthetic Data through the Lens of Regularization
Machine Learning (Stat)
Finds best mix of fake and real data.