Explainability matters: The effect of liability rules on the healthcare sector
By: Jiawen Wei, Elena Verona, Andrea Bertolini, et al.
Potential Business Impact:
Clarifies how liability for errors involving medical AI systems is allocated between practitioners, healthcare facilities, and manufacturers.
Explainability, the capability of an artificial intelligence system (AIS) to explain its outcomes in a manner comprehensible to human beings at an acceptable level, has been deemed essential for critical sectors such as healthcare. Is this really the case? In this perspective, we analyze two extreme cases: the "Oracle" (an AIS without explainability) versus the "AI Colleague" (an AIS with explainability). We discuss how the level of automation and explainability of an AIS can affect the allocation of liability between the medical practitioner or facility and the manufacturer of the AIS. We argue that, from a legal standpoint, explainability plays a crucial role in setting a responsibility framework for healthcare, one that shapes the behavior of all involved parties and mitigates the risk of defensive medicine practices.
Similar Papers
Legally-Informed Explainable AI
Human-Computer Interaction
Designs AI explanations informed by legal requirements to protect affected parties.
Accountability Framework for Healthcare AI Systems: Towards Joint Accountability in Decision Making
Artificial Intelligence
Proposes a framework for joint accountability in AI-assisted clinical decision making.
Understanding the Impact of Physicians' Legal Considerations on XAI Systems
Human-Computer Interaction
Examines how physicians' legal considerations shape the design and use of XAI systems.