Balancing Safety and Helpfulness in Healthcare AI Assistants through Iterative Preference Alignment
By: Huy Nghiem, Swetasudha Panda, Devashish Khatwani and more
Potential Business Impact:
Makes AI doctors safer by catching bad advice.
Large Language Models (LLMs) are increasingly used in healthcare, yet ensuring their safety and trustworthiness remains a barrier to deployment. Conversational medical assistants must avoid unsafe compliance without over-refusing benign queries. We present an iterative post-deployment alignment framework that applies Kahneman-Tversky Optimization (KTO) and Direct Preference Optimization (DPO) to refine models against domain-specific safety signals. Using the CARES-18K benchmark for adversarial robustness, we evaluate four LLMs (Llama-3B/8B, Meditron-8B, Mistral-7B) across multiple cycles. Our results show up to a 42% improvement in safety-related metrics for harmful query detection, alongside trade-offs with erroneous refusals, exposing architecture-dependent calibration biases. We also perform ablation studies to identify when self-evaluation is reliable and when external or fine-tuned judges are necessary to maximize performance gains. Our findings underscore the importance of adopting best practices that balance patient safety, user trust, and clinical utility in the design of conversational medical assistants.
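For context on the alignment methods named in the abstract, here is a minimal sketch of the standard DPO objective; it is not the paper's implementation, and the tensor names and beta value are illustrative. KTO differs in that it scores unpaired "desirable"/"undesirable" examples against a reference point instead of paired preferences.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over per-sequence log-probabilities log pi(y|x).

    'chosen' = preferred response (e.g. a safe refusal of a harmful query),
    'rejected' = dispreferred response (e.g. unsafe compliance).
    """
    # Implicit reward of each response relative to the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the log-sigmoid of the margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In an iterative post-deployment loop of the kind the abstract describes, preference pairs would be rebuilt each cycle from the safety judge's labels (safe refusal vs. unsafe compliance) and the model re-optimized with a loss of this form.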
Similar Papers
Truth, Trust, and Trouble: Medical AI on the Edge
Computation and Language
Makes AI answer health questions more safely.
Can You Trust an LLM with Your Life-Changing Decision? An Investigation into AI High-Stakes Responses
Artificial Intelligence
Makes AI ask questions before giving advice.
Empathy by Design: Aligning Large Language Models for Healthcare Dialogue
Computation and Language
Makes AI assistants kinder and more truthful.