Policy Learning with Abstention
By: Ayush Sawarni, Jikai Jin, Justin Whitehouse, and more
Potential Business Impact:
Lets computers decide when to ask for help.
Policy learning algorithms are widely used in areas such as personalized medicine and advertising to develop individualized treatment regimes. However, most methods force a decision even when predictions are uncertain, which is risky in high-stakes settings. We study policy learning with abstention, where a policy may defer to a safe default or an expert. When a policy abstains, it receives a small additive reward on top of the value of a random guess. We propose a two-stage learner that first identifies a set of near-optimal policies and then constructs an abstention rule from their disagreements. We establish fast O(1/n)-type regret guarantees when propensities are known, and extend these guarantees to the unknown-propensity case via a doubly robust (DR) objective. We further show that abstention is a versatile tool with direct applications to other core problems in policy learning: it yields improved guarantees under margin conditions without the common realizability assumption, connects to distributionally robust policy learning by hedging against small data shifts, and supports safe policy improvement by ensuring improvement over a baseline policy with high probability.
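A minimal sketch of how the two-stage idea could look in code, assuming a finite set of candidate policies and a fitted reward model. The function names (dr_value, learn_with_abstention), the tolerance-based selection rule, and the reward_model interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: (1) keep every candidate policy whose estimated
# value is within a tolerance of the best, (2) abstain wherever the
# retained near-optimal policies disagree on the action.

def dr_value(policy, X, A, R, propensities, reward_model):
    """Doubly robust (DR) estimate of a deterministic policy's value.

    policy: function mapping a context row to an action index.
    X: contexts, shape (n, d); A: logged actions, shape (n,);
    R: observed rewards, shape (n,);
    propensities: P(A_i | X_i) under the logging policy, shape (n,);
    reward_model: function (X, actions) -> predicted rewards, shape (n,)
        (an assumed interface for a fitted outcome model).
    """
    pi_a = np.array([policy(x) for x in X])   # actions the policy would take
    match = (pi_a == A).astype(float)         # logged action agrees with policy
    direct = reward_model(X, pi_a)            # model-based term
    # Importance-weighted correction: unbiased when propensities are known.
    correction = match / propensities * (R - reward_model(X, A))
    return np.mean(direct + correction)

def learn_with_abstention(candidates, X, A, R, propensities,
                          reward_model, tol):
    """Stage 1: near-optimal set; Stage 2: disagreement-based abstention."""
    values = [dr_value(pi, X, A, R, propensities, reward_model)
              for pi in candidates]
    best = max(values)
    near_optimal = [pi for pi, v in zip(candidates, values)
                    if v >= best - tol]

    def decide(x):
        actions = {pi(x) for pi in near_optimal}
        if len(actions) > 1:   # near-optimal policies disagree
            return None        # abstain: defer to a safe default or expert
        return actions.pop()   # unanimous recommendation

    return decide
```

The DR form is what permits the extension to unknown propensities: the estimate remains consistent if either the propensity estimates or the reward model is accurate, whereas a pure importance-weighted estimate needs the propensities themselves.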
Similar Papers
Bounded-Abstention Pairwise Learning to Rank
Machine Learning (CS)
Helps computers ask for help when unsure.
When In Doubt, Abstain: The Impact of Abstention on Strategic Classification
Machine Learning (CS)
Stops people from tricking computer decisions.
Learning When Not to Learn: Risk-Sensitive Abstention in Bandits with Unbounded Rewards
Machine Learning (CS)
Keeps AI from making big mistakes.