Can LLMs Address Mental Health Questions? A Comparison with Human Therapists
By: Synthia Wang, Yuwei Cheng, Austin Song, and more
Potential Business Impact:
AI chatbots write clear, supportive answers to mental health questions, but people still prefer human therapists.
Limited access to mental health care has motivated the use of digital tools and conversational agents powered by large language models (LLMs), yet their quality and reception remain unclear. We present a study comparing therapist-written responses to those generated by ChatGPT, Gemini, and Llama for real patient questions. Text analysis showed that LLMs produced longer, more readable, and lexically richer responses with a more positive tone, while therapist responses were more often written in the first person. In a survey with 150 users and 23 licensed therapists, participants rated LLM responses as clearer, more respectful, and more supportive than therapist-written answers. Yet, both groups of participants expressed a stronger preference for human therapist support. These findings highlight the promise and limitations of LLMs in mental health, underscoring the need for designs that balance their communicative strengths with concerns of trust, privacy, and accountability.
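The text analysis compares response length, readability, lexical richness, tone, and first-person usage. The authors do not publish their pipeline, so the sketch below is only a rough illustration of how such surface metrics might be computed; the Flesch Reading Ease formula and the vowel-group syllable counter are standard approximations (not the paper's method), and tone/sentiment is left out because it typically requires a lexicon or model.

```python
import re

# Pronouns used as a simple proxy for first-person voice, which the
# study found more often in therapist-written responses.
FIRST_PERSON = {"i", "me", "my", "mine", "myself", "we", "us", "our", "ours"}

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups; real pipelines use a dictionary.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def text_metrics(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return {
        "length_words": n_words,
        # Flesch Reading Ease: higher scores mean easier-to-read text.
        "flesch_reading_ease": 206.835
            - 1.015 * (n_words / sentences)
            - 84.6 * (syllables / n_words),
        # Type-token ratio as a simple lexical-richness proxy.
        "type_token_ratio": len({w.lower() for w in words}) / n_words,
        # Share of first-person pronouns among all tokens.
        "first_person_rate": sum(w.lower() in FIRST_PERSON for w in words) / n_words,
    }

# Hypothetical example replies, for illustration only.
therapist = "I hear how hard this has been. I wonder if we could explore what triggers it."
llm = ("Anxiety is a common and treatable condition. Consider grounding "
       "exercises, regular sleep, and reaching out for professional support.")

for label, reply in [("therapist", therapist), ("llm", llm)]:
    print(label, text_metrics(reply))
```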
Similar Papers
"It Listens Better Than My Therapist": Exploring Social Media Discourse on LLMs as Mental Health Tool
Computation and Language
Explores how people use AI chats to feel better.
Evaluating the Clinical Safety of LLMs in Response to High-Risk Mental Health Disclosures
Computers and Society
Tests how safely AI responds to mental health crises.
Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs
Computation and Language
Helps chatbots spot and help people in mental health crises.