Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs
By: Adrian Arnaiz-Rodriguez, Miguel Baidal, Erik Derner, and more
Potential Business Impact:
Helps chatbots detect and respond safely to people in mental health crises.
The widespread use of chatbots powered by large language models (LLMs) such as ChatGPT and Llama has fundamentally reshaped how people seek information and advice across domains. Increasingly, these chatbots are being used in high-stakes contexts, including emotional support and mental health concerns. While LLMs can offer scalable support, their ability to safely detect and respond to acute mental health crises remains poorly understood. Progress is hampered by the absence of unified crisis taxonomies, robust annotated benchmarks, and empirical evaluations grounded in clinical best practices. In this work, we address these gaps by introducing a unified taxonomy of six clinically informed mental health crisis categories, curating a diverse evaluation dataset, and establishing an expert-designed protocol for assessing response appropriateness. We systematically benchmark three state-of-the-art LLMs for their ability to classify crisis types and generate safe, appropriate responses. The results reveal that while LLMs are highly consistent and generally reliable in addressing explicit crisis disclosures, significant risks remain. A non-negligible proportion of responses are rated as inappropriate or harmful, with responses generated by an open-weight model exhibiting higher failure rates than those generated by the commercial models. We also identify systemic weaknesses in handling indirect or ambiguous risk signals, a reliance on formulaic and inauthentic default replies, and frequent misalignment with user context. These findings underscore the urgent need for enhanced safeguards, improved crisis detection, and context-aware interventions in LLM deployments. Our taxonomy, datasets, and evaluation framework lay the groundwork for ongoing research and responsible innovation in AI-driven mental health support, helping to minimize harm and better protect vulnerable users.
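The abstract describes a two-part evaluation: first classify each user message into one of six crisis categories, then judge whether the model's reply is appropriate. As an illustration only, the sketch below shows how such a classification benchmark loop might be wired up in Python. The `llm` callable, the placeholder category labels, and the helper names are assumptions; the paper's actual taxonomy, prompts, and rating protocol are not detailed in the abstract.

```python
# Minimal sketch of a crisis-classification benchmark loop, assuming a generic
# `llm` callable (prompt -> text). Category names are placeholders: the paper's
# six clinically informed categories are not listed in the abstract.
from collections import Counter
from typing import Callable, Dict, List, Tuple

CRISIS_CATEGORIES = [  # hypothetical placeholder labels
    "category_1", "category_2", "category_3",
    "category_4", "category_5", "category_6",
]

def classify_message(llm: Callable[[str], str], message: str) -> str:
    """Ask the model to map a user message onto exactly one taxonomy category."""
    prompt = (
        "Classify the following message into exactly one of these crisis "
        f"categories: {', '.join(CRISIS_CATEGORIES)}.\n"
        f"Message: {message}\n"
        "Answer with the category name only."
    )
    answer = llm(prompt).strip().lower()
    # Fall back to the first category if the model's answer is off-taxonomy.
    return answer if answer in CRISIS_CATEGORIES else CRISIS_CATEGORIES[0]

def benchmark(llm: Callable[[str], str],
              dataset: List[Tuple[str, str]]) -> Dict[str, object]:
    """Compute accuracy and per-category error counts over (message, gold_label) pairs."""
    errors: Counter = Counter()
    correct = 0
    for message, gold in dataset:
        prediction = classify_message(llm, message)
        if prediction == gold:
            correct += 1
        else:
            errors[gold] += 1
    return {
        "accuracy": correct / max(len(dataset), 1),
        "errors_by_gold_label": dict(errors),
    }
```

A second pass over the same dataset would score each generated reply against the expert-designed appropriateness protocol; that rating step is omitted here because it depends on the clinical rubric, which the abstract does not reproduce.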
Similar Papers
Evaluating the Clinical Safety of LLMs in Response to High-Risk Mental Health Disclosures
Computers and Society
Tests whether AI responds safely to people in mental health crises.
"It Listens Better Than My Therapist": Exploring Social Media Discourse on LLMs as Mental Health Tool
Computation and Language
Studies how people talk online about using AI chats for mental health support.
Evaluating Large Language Models in Crisis Detection: A Real-World Benchmark from Psychological Support Hotlines
Computation and Language
Helps AI understand people in crisis better.