Evaluating the Clinical Safety of LLMs in Response to High-Risk Mental Health Disclosures
By: Siddharth Shah, Amit Gupta, Aarav Mann, and others
Potential Business Impact:
Helps make AI chatbots safer for people in mental health crises.
As large language models (LLMs) increasingly mediate emotionally sensitive conversations, especially in mental health contexts, their ability to recognize and respond to high-risk situations becomes a matter of public safety. This study evaluates the responses of six popular LLMs (Claude, Gemini, DeepSeek, ChatGPT, Grok 3, and Llama) to user prompts simulating crisis-level mental health disclosures. Using a coding framework developed by licensed clinicians, the authors assessed five safety-oriented behaviors: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources, and invitation to continue the conversation. Claude outperformed all others in the global assessment, while Grok 3, ChatGPT, and Llama underperformed across multiple domains. Notably, most models exhibited empathy, but few consistently provided practical support or sustained engagement. These findings suggest that while LLMs show potential for emotionally attuned communication, none currently meet satisfactory clinical standards for crisis response. Ongoing development and targeted fine-tuning are essential to ensure ethical deployment of AI in mental health settings.
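To make the evaluation setup concrete, the following minimal Python sketch shows one way clinician codes for the five behaviors could be tallied into per-model rates. The behavior names come from the abstract; the binary 0/1 coding scheme, the function and model names, and the placeholder values are illustrative assumptions, not the authors' actual rubric, code, or results.

# Illustrative sketch (assumption): tally clinician codes for the five
# safety behaviors named in the abstract. The binary 0/1 coding and all
# names/values below are hypothetical, not the study's rubric or data.
from collections import defaultdict

BEHAVIORS = [
    "risk_acknowledgment",
    "empathy",
    "encouragement_to_seek_help",
    "specific_resources",
    "invitation_to_continue",
]

def summarize(codings):
    """Return the rate of each coded behavior per model.

    codings: list of dicts, e.g.
      {"model": "ModelA", "risk_acknowledgment": 1, "empathy": 1, ...}
    """
    totals = defaultdict(lambda: defaultdict(int))
    counts = defaultdict(int)
    for row in codings:
        model = row["model"]
        counts[model] += 1
        for behavior in BEHAVIORS:
            totals[model][behavior] += row.get(behavior, 0)
    return {
        model: {b: totals[model][b] / counts[model] for b in BEHAVIORS}
        for model in counts
    }

# Placeholder usage with made-up codes for two unnamed models:
example = [
    {"model": "ModelA", "risk_acknowledgment": 1, "empathy": 1,
     "encouragement_to_seek_help": 1, "specific_resources": 0,
     "invitation_to_continue": 1},
    {"model": "ModelB", "risk_acknowledgment": 0, "empathy": 1,
     "encouragement_to_seek_help": 0, "specific_resources": 0,
     "invitation_to_continue": 0},
]
print(summarize(example))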
Similar Papers
Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs
Computation and Language
Helps chatbots recognize and support people in a mental health crisis.
"It Listens Better Than My Therapist": Exploring Social Media Discourse on LLMs as Mental Health Tool
Computation and Language
Explores how people describe using AI chats for mental health support.
Can LLMs Address Mental Health Questions? A Comparison with Human Therapists
Human-Computer Interaction
Compares AI chatbot answers to mental health questions with responses from human therapists.