Not someone, but something: Rethinking trust in the age of medical AI
By: Jan Beger
Potential Business Impact:
Explains how medical AI can earn the trust of patients and clinicians.
As artificial intelligence (AI) becomes embedded in healthcare, trust in medical decision-making is changing fast. Nowhere is this shift more visible than in radiology, where AI tools now span the imaging workflow, from scheduling and acquisition to interpretation, reporting, and communication with referrers and patients.

This opinion paper argues that trust in AI is not a simple transfer from humans to machines; it is a dynamic, evolving relationship that must be built and maintained. Rather than debating whether AI belongs in medicine, it asks: what kind of trust must AI earn, and how? Drawing on philosophy, bioethics, and system design, it explores the key differences between human trust and machine reliability, emphasizing transparency, accountability, and alignment with the values of good care. Trust in AI, it argues, should rest not on mimicking empathy or intuition, but on thoughtful design, responsible deployment, and clear moral responsibility. The goal is a balanced view, one that avoids both blind optimism and reflexive fear: trust in AI must be treated not as a given, but as something to be earned over time.
Similar Papers
Limits of trust in medical AI
Machine Learning (CS)
AI can help doctors, but patients might not trust it.
The promise and perils of AI in medicine
Computers and Society
Helps doctors find diseases and improve hospitals.
A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption
Artificial Intelligence
Makes AI doctors safe and fair for everyone.