Balancing Safety and Helpfulness in Healthcare AI Assistants through Iterative Preference Alignment

Published: December 3, 2025 | arXiv ID: 2512.04210v1

By: Huy Nghiem, Swetasudha Panda, Devashish Khatwani, and more

Potential Business Impact:

Makes conversational medical AI assistants safer by detecting harmful queries while avoiding unnecessary refusals of benign ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are increasingly used in healthcare, yet ensuring their safety and trustworthiness remains a barrier to deployment. Conversational medical assistants must avoid unsafe compliance without over-refusing benign queries. We present an iterative post-deployment alignment framework that applies Kahneman-Tversky Optimization (KTO) and Direct Preference Optimization (DPO) to refine models against domain-specific safety signals. Using the CARES-18K benchmark for adversarial robustness, we evaluate four LLMs (Llama-3B/8B, Meditron-8B, Mistral-7B) across multiple cycles. Our results show up to 42% improvement in safety-related metrics for harmful query detection, alongside trade-offs in erroneous refusal rates that expose architecture-dependent calibration biases. We also perform ablation studies to identify when self-evaluation is reliable and when external or fine-tuned judges are necessary to maximize performance gains. Our findings underscore the importance of adopting best practices that balance patient safety, user trust, and clinical utility in the design of conversational medical assistants.
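To make the alignment loop concrete, below is a minimal sketch of one preference-optimization cycle using DPO via Hugging Face TRL. The model name, dataset file, field layout, and hyperparameters are illustrative assumptions rather than the paper's actual configuration, and exact argument names (e.g. `processing_class` vs. `tokenizer`) vary across TRL versions; a KTO cycle would be analogous, swapping in `KTOTrainer` with per-response desirable/undesirable labels instead of preference pairs.

```python
# Sketch of one post-deployment DPO alignment cycle (assumed setup, not the paper's exact pipeline).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Any of the evaluated base models could be used here; this name is a placeholder.
model_name = "meta-llama/Llama-3.2-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical preference data built from safety judgments on deployed queries:
# each record holds "prompt", "chosen" (safe, helpful reply), and "rejected"
# (unsafe compliance or an erroneous refusal).
pref_data = load_dataset("json", data_files="safety_preferences.jsonl", split="train")

# Illustrative hyperparameters; one such run corresponds to a single alignment cycle.
config = DPOConfig(output_dir="dpo-cycle-1", beta=0.1, num_train_epochs=1)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=pref_data,
    processing_class=tokenizer,
)
trainer.train()
# The refined model would then be re-evaluated (e.g., on CARES-18K-style adversarial
# queries) to generate safety signals for the next cycle.
```

In the iterative framework described above, the judged outputs of each cycle supply the preference (or desirability) labels for the next, which is where the choice between self-evaluation and an external or fine-tuned judge matters.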

Country of Origin
🇺🇸 United States


Page Count
36 pages

Category
Computer Science:
Artificial Intelligence