Optimal Regularization for Performative Learning

Published: October 14, 2025 | arXiv ID: 2510.12249v1

By: Edwige Cyffers, Alireza Mirrokni, Marco Mondelli

Potential Business Impact:

Helps deployed models stay accurate when the data reacts to them, by setting regularization in anticipation of that shift.

Business Areas:
Personalization, Commerce and Shopping

In performative learning, the data distribution reacts to the deployed model (for example, because strategic users adapt their features to game it), creating a more complex dynamic than in classical supervised learning. One should thus not only optimize the model for the current data but also account for the fact that the model may steer the distribution in a new direction, without knowing the exact nature of the shift. We explore how regularization can help cope with performative effects by studying its impact in high-dimensional ridge regression. We show that, while performative effects worsen the test risk in the population setting, they can be beneficial in the over-parameterized regime where the number of features exceeds the number of samples. Moreover, the optimal regularization scales with the overall strength of the performative effect, making it possible to set the regularization in anticipation of this effect. We illustrate this finding through empirical evaluations of the optimal regularization parameter on both synthetic and real-world datasets.
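The setting described above can be illustrated with a small simulation. This is a minimal sketch, not the paper's actual model: it assumes a toy linear performative effect in which the test feature mean shifts by `eps * theta` in reaction to the deployed ridge estimator `theta`, and it grid-searches the ridge penalty that minimizes the resulting performative test risk in an over-parameterized setup (more features than samples). All names and the form of the shift are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-parameterized setup: more features (d) than training samples (n).
n, d, sigma = 30, 60, 0.3
theta_star = rng.standard_normal(d) / np.sqrt(d)  # ground-truth coefficients

# Training data drawn from the base (pre-deployment) distribution.
X_train = rng.standard_normal((n, d))
y_train = X_train @ theta_star + sigma * rng.standard_normal(n)

def ridge_fit(X, y, lam):
    """Closed-form ridge estimator: (X'X + lam*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def performative_risk(theta, eps, n_test=5000):
    """Test risk under an assumed linear performative shift:
    the feature mean moves by eps * theta in reaction to the model."""
    X = rng.standard_normal((n_test, d)) + eps * theta
    y = X @ theta_star + sigma * rng.standard_normal(n_test)
    return float(np.mean((X @ theta - y) ** 2))

def best_lambda(eps, grid):
    """Pick the penalty on the grid with lowest performative test risk."""
    risks = [performative_risk(ridge_fit(X_train, y_train, lam), eps)
             for lam in grid]
    return grid[int(np.argmin(risks))]

grid = [10.0 ** k for k in range(-3, 3)]
lam_weak = best_lambda(0.0, grid)    # no performative effect
lam_strong = best_lambda(2.0, grid)  # strong performative effect
print("optimal lambda, eps=0:", lam_weak, "| eps=2:", lam_strong)
```

Sweeping `eps` in this toy setup lets one compare how the selected penalty moves with the strength of the performative effect, mirroring the scaling result the abstract describes.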

Country of Origin
🇦🇹 Austria

Repos / Data Links

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)