Balancing Safety and Helpfulness in Healthcare AI Assistants through Iterative Preference Alignment
By: Huy Nghiem, Swetasudha Panda, Devashish Khatwani, and more
Potential Business Impact:
Makes AI doctors safer by catching bad advice.
Large Language Models (LLMs) are increasingly used in healthcare, yet ensuring their safety and trustworthiness remains a barrier to deployment. Conversational medical assistants must avoid unsafe compliance without over-refusing benign queries. We present an iterative post-deployment alignment framework that applies Kahneman-Tversky Optimization (KTO) and Direct Preference Optimization (DPO) to refine models against domain-specific safety signals. Using the CARES-18K benchmark for adversarial robustness, we evaluate four LLMs (Llama-3B/8B, Meditron-8B, Mistral-7B) across multiple alignment cycles. Our results show up to 42% improvement in safety-related metrics for harmful query detection, alongside trade-offs with erroneous refusals of benign queries, exposing architecture-dependent calibration biases. We also perform ablation studies to identify when self-evaluation is reliable and when external or fine-tuned judges are necessary to maximize performance gains. Our findings underscore the importance of adopting best practices that balance patient safety, user trust, and clinical utility in the design of conversational medical assistants.
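The alignment step the abstract describes builds on preference-optimization objectives such as DPO. As a rough illustration only (this is not the authors' code, and it omits the paper's KTO variant, iterative cycles, and safety-signal construction), the sketch below shows the standard DPO loss: it contrasts the policy's log-probabilities on a preferred versus a dispreferred response against a frozen reference model. The function name and argument names are hypothetical.

```python
# Minimal sketch of the DPO objective (Rafailov et al., 2023) referenced in the abstract.
# Illustrative only; not the paper's implementation or its domain-specific safety pipeline.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is a batch of summed token log-probabilities for the
    preferred ("chosen") or dispreferred ("rejected") response under either
    the policy being trained or the frozen reference model."""
    # Implicit rewards: log-ratio of policy to reference for each response.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Push the margin between chosen and rejected implicit rewards to be positive.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()
```

In a safety-alignment setting like the one described here, the "chosen" response would typically be a safe refusal (for harmful queries) or a helpful answer (for benign ones), and the "rejected" response the opposite behavior; how those pairs are collected from post-deployment safety signals is specific to the paper.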
Similar Papers
Evaluating the Clinical Safety of LLMs in Response to High-Risk Mental Health Disclosures
Computers and Society
AI helps people in mental health crises.
DialogGuard: Multi-Agent Psychosocial Safety Evaluation of Sensitive LLM Responses
Artificial Intelligence
Tests AI for safe and helpful online chats.
Enabling Doctor-Centric Medical AI with LLMs through Workflow-Aligned Tasks and Benchmarks
Computation and Language
Helps doctors use AI for better patient care.