Seeking Late Night Life Lines: Experiences of Conversational AI Use in Mental Health Crisis
By: Leah Hope Ajmani, Arka Ghosh, Benjamin Kaveladze, et al.
Potential Business Impact:
AI helps people in crisis connect with humans.
Online, people often recount their experiences turning to conversational AI agents (e.g., ChatGPT, Claude, Copilot) for mental health support, with some going so far as to replace their therapists. These anecdotes suggest that AI agents have great potential to offer accessible mental health support. However, it is unclear how to realize this potential in extreme mental health crisis use cases. In this work, we explore the first-person experience of turning to a conversational AI agent in a mental health crisis. From a testimonial survey (n = 53) of lived experiences, we find that people use AI agents to fill the in-between spaces of human support; they turn to AI because they lack access to mental health professionals or fear burdening others. At the same time, our interviews with mental health experts (n = 16) suggest that human-to-human connection is an essential positive action when managing a mental health crisis. Drawing on the stages of change model, our results suggest that a responsible AI crisis intervention is one that increases the user's preparedness to take a positive action while de-escalating any intended negative action. We discuss the implications of designing conversational AI agents as bridges toward human-to-human connection rather than as ends in themselves.
Similar Papers
Artificial Empathy: AI based Mental Health
Quantitative Biology (Other)
AI chatbots offer comfort but need better safety.
A Study about Distribution and Acceptance of Conversational Agents for Mental Health in Germany: Keep the Human in the Loop?
Human-Computer Interaction
Helps people talk to computers about feelings.
Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness
Human-Computer Interaction
Chatbots can harm people with mental health issues.