Limits of trust in medical AI
By: Joshua Hatherley
Potential Business Impact:
AI can help doctors, but patients might not trust it.
Artificial intelligence (AI) is expected to revolutionize the practice of medicine. Recent advances in deep learning have demonstrated success in a variety of clinical tasks, including detecting diabetic retinopathy from retinal images, predicting hospital readmissions, and aiding in the discovery of new drugs. AI's progress in medicine, however, has raised concerns about the potential effects of this technology on relationships of trust in clinical practice. In this paper, I argue that there is merit to these concerns: AI systems can be relied upon, and are capable of reliability, but they cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems in their medical decision-making, this has the potential to produce a deficit of trust in clinical relationships.
Similar Papers
Not someone, but something: Rethinking trust in the age of medical AI
Computers and Society
Examines how trust in AI doctors can be built to help patients.
The promise and perils of AI in medicine
Computers and Society
Helps doctors find diseases and improve hospitals.
Data over dialogue: Why artificial intelligence is unlikely to humanise medicine
Computers and Society
AI might make doctors less caring and trustworthy.