Understanding the Impact of Physicians' Legal Considerations on XAI Systems
By: Gennie Mansi, Mark Riedl
Potential Business Impact:
Helps doctors trust and safely use AI by addressing their legal concerns.
Physicians are, and feel, ethically, professionally, and legally responsible for patient outcomes, buffering patients from harmful determinations made by medical AI systems. Many have called for explainable AI (XAI) systems to help physicians incorporate medical AI recommendations into their workflows in ways that reduce the potential for harm to patients. While prior work has demonstrated how physicians' legal concerns shape their medical decision making, little work has explored how XAI systems should be designed in light of these concerns. In this study, we conducted interviews with 10 physicians to understand where and how they anticipate errors may occur with a medical AI system and how these anticipated errors connect to their legal concerns. Physicians anticipated risks associated with using an AI system for patient care, but voiced uncertainty about how their legal risk mitigation strategies might change given a new technical system. Based on these findings, we describe implications for designing XAI systems that can address physicians' legal concerns. Specifically, we identify the need to provide AI recommendations alongside contextual information that guides physicians' risk mitigation strategies, including how seemingly non-legal aspects of their systems, such as medical documentation and auditing requests, might be incorporated into a legal case.
Similar Papers
Legally-Informed Explainable AI
Human-Computer Interaction
Helps AI explain decisions to protect people legally.
When AI Writes Back: Ethical Considerations by Physicians on AI-Drafted Patient Message Replies
Computers and Society
Helps doctors answer patient messages faster.
Explainability matters: The effect of liability rules on the healthcare sector
Computers and Society
Examines how liability rules for AI mistakes shape the healthcare sector.