Counterfactual Harm: A Counter-argument
By: Amit N. Sawant, Mats J. Stensrud
As AI systems are increasingly used to guide decisions, it is essential that they follow ethical principles. A core principle in medicine is non-maleficence, often equated with "do no harm". A formal definition of harm based on counterfactual reasoning has been proposed and popularized. This notion of harm has been promoted in simple settings with binary treatments and outcomes. Here, we highlight a problem with this definition in settings involving multiple treatment options. Using an example with three tuberculosis treatments (say, A, B, and C), we demonstrate that the counterfactual definition of harm can produce intransitive results in pairwise comparisons: B is less harmful than A, and C is less harmful than B, yet C is more harmful than A. This intransitivity poses a challenge, as it may lead to practical (clinical) decisions that are difficult to justify or defend. In contrast, an interventionist definition of harm based on expected utility forgoes counterfactual comparisons and ensures transitive treatment rankings.
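To make the claimed cycle concrete, here is a minimal numerical sketch, not the authors' tuberculosis example: it assumes that pairwise counterfactual harm is compared via the probability that the received treatment's counterfactual utility is strictly worse than the alternative's, and it uses three hypothetical treatments whose potential-outcome utilities each take three equally likely values, independently across treatments, in the style of nontransitive dice.

```python
from itertools import product
from fractions import Fraction

# Hypothetical potential-outcome utilities for three treatments.
# Each treatment's counterfactual utility takes one of three values with
# equal probability, independently across treatments -- an illustrative
# "nontransitive dice" construction, NOT the paper's tuberculosis example.
OUTCOMES = {
    "A": (1, 6, 8),
    "B": (2, 4, 9),
    "C": (3, 5, 7),
}

def harm(x, y):
    """P(receiving x is counterfactually harmful relative to y): the
    probability that the utility under x is strictly worse than under y."""
    total = Fraction(0)
    for ux, uy in product(OUTCOMES[x], OUTCOMES[y]):
        if ux < uy:
            total += Fraction(1, 9)  # each joint realization has prob 1/9
    return total

for x, y in [("B", "A"), ("C", "B"), ("C", "A")]:
    print(f"h({x},{y}) = {harm(x, y)},  h({y},{x}) = {harm(y, x)}")
# h(B,A) = 4/9 < h(A,B) = 5/9  -> B is less harmful than A
# h(C,B) = 4/9 < h(B,C) = 5/9  -> C is less harmful than B
# h(C,A) = 5/9 > h(A,C) = 4/9  -> yet C is MORE harmful than A: a cycle

# The interventionist criterion based on expected utility, by contrast,
# assigns each treatment a single score, so its ranking cannot cycle.
for t, vals in OUTCOMES.items():
    print(t, "E[U] =", Fraction(sum(vals), 3))
```

Under these illustrative assumptions the pairwise harm comparisons cycle, while the expected utilities (here all equal to 5) form a transitive ordering, mirroring the contrast drawn in the abstract.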
Similar Papers
When is using AI the rational choice? The importance of counterfactuals in AI deployment decisions
Computers and Society
Examines when deploying an AI system is the rational choice, emphasizing the role of counterfactuals in deployment decisions.
Statistical Decision Theory with Counterfactual Loss
Statistics Theory
Develops statistical decision theory for choosing treatments under loss functions defined on counterfactual outcomes.
What is Harm? Baby Don't Hurt Me! On the Impossibility of Complete Harm Specification in AI Alignment
Artificial Intelligence
Argues that a complete specification of harm is impossible in AI alignment.