When In Doubt, Abstain: The Impact of Abstention on Strategic Classification
By: Lina Alkarmi, Ziyuan Huang, Mingyan Liu
Potential Business Impact:
Stops people from tricking computer decisions.
Algorithmic decision making is increasingly prevalent, but it is often vulnerable to strategic manipulation by agents seeking a favorable outcome. Prior research has shown that classifier abstention (allowing a classifier to decline to make a decision due to insufficient confidence) can significantly increase classifier accuracy. This paper studies abstention in a strategic classification context, exploring how its introduction affects strategic agents' responses and how principals should optimally leverage it. We model this interaction as a Stackelberg game in which a principal, acting as the classifier, first announces its decision policy, and strategic agents, acting as followers, then manipulate their features to receive a desired outcome. We focus on binary classifiers where agents manipulate observable features rather than their true features, and show that optimal abstention ensures that the principal's utility (or loss) is no worse than in a non-abstention setting, even in the presence of strategic agents. Beyond improving accuracy, abstention can also serve as a deterrent to manipulation: it makes manipulation costlier for agents, especially less qualified ones, to achieve a positive outcome when manipulation costs are significant enough to affect agent behavior. These results highlight abstention as a valuable tool for reducing the negative effects of strategic behavior in algorithmic decision-making systems.
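The deterrence effect described above can be illustrated with a toy sketch. This is not the paper's model, only an assumed simplification: the principal accepts when an observed feature exceeds a threshold, abstention inserts a band just below acceptance, and an agent inflates its observed feature at a linear cost (all numeric values here are illustrative assumptions).

```python
# Toy sketch of abstention as a manipulation deterrent (illustrative
# parameters; not the paper's exact model). The principal accepts when
# the observed feature reaches `accept_at`; an agent with true feature
# x can inflate its observed feature at `cost_per_unit`, and gains
# `benefit` from a positive decision.

def best_response(x, accept_at, cost_per_unit, benefit=1.0):
    """Return the agent's optimal observed feature: manipulate up to
    `accept_at` only if the gain outweighs the manipulation cost."""
    gap = max(0.0, accept_at - x)
    return accept_at if cost_per_unit * gap <= benefit else x

# Without abstention, acceptance requires observed >= 0.5.
# With an abstention band on [0.5, 0.7), a positive decision now
# requires observed >= 0.7, so manipulation must cover a larger gap.
no_abstain_threshold = 0.5
abstain_accept_threshold = 0.7
cost = 3.0  # assumed cost per unit of feature inflation

for x in [0.45, 0.30]:
    r0 = best_response(x, no_abstain_threshold, cost)
    r1 = best_response(x, abstain_accept_threshold, cost)
    print(f"true x={x:.2f}: gams the classifier without abstention: "
          f"{r0 >= no_abstain_threshold}; with abstention: "
          f"{r1 >= abstain_accept_threshold}")
```

In this toy setup the moderately qualified agent (x = 0.45) still finds manipulation worthwhile under abstention, while the less qualified agent (x = 0.30) is priced out by the wider gap, matching the paper's qualitative claim that abstention disproportionately deters less qualified agents.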
Similar Papers
Policy Learning with Abstention
Machine Learning (CS)
Lets computers decide when to ask for help.
Bounded-Abstention Pairwise Learning to Rank
Machine Learning (CS)
Helps computers ask for help when unsure.
Interpretable and Fair Mechanisms for Abstaining Classifiers
Machine Learning (CS)
Makes AI fairer by letting it skip tough, unfair guesses.