LLM Use for Mental Health: Crowdsourcing Users' Sentiment-based Perspectives and Values from Social Discussions
By: Lingyao Li, Xiaoshan Huang, Renkai Ma, and more
Potential Business Impact:
Helps chatbots give better mental health advice.
Large language model (LLM) chatbots such as ChatGPT are increasingly used for mental health support. They offer accessible, therapeutic support but also raise concerns about misinformation, over-reliance, and risk in the high-stakes context of mental health. We crowdsource large-scale user posts from six major social media platforms to examine how people discuss their interactions with LLM chatbots across different mental health conditions. Through an LLM-assisted pipeline grounded in Value-Sensitive Design (VSD), we map the relationships among user-reported sentiments, mental health conditions, perspectives, and values. Our results reveal that the use of LLM chatbots is condition-specific: users with neurodivergent conditions (e.g., ADHD, ASD) report strongly positive sentiments and instrumental or appraisal support, whereas users with higher-risk disorders (e.g., schizophrenia, bipolar disorder) express more negative sentiments. We further uncover how user perspectives co-occur with underlying values, such as identity, autonomy, and privacy. Finally, we discuss shifting from "one-size-fits-all" chatbot design toward condition-specific, value-sensitive LLM design.
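The abstract describes an LLM-assisted pipeline that labels posts with a mental health condition and a sentiment, then relates the two. As a rough illustration of that co-occurrence step only, here is a minimal sketch; the keyword-based classifiers stand in for the paper's actual LLM labeling calls, and every function name, label set, and sample post below is an assumption for illustration, not the authors' code or data.

```python
"""Hypothetical sketch: tag each post with a condition and a sentiment,
then count condition-sentiment co-occurrences. The keyword heuristics
below are placeholders for LLM classification calls."""
from collections import Counter
from typing import Optional

# Illustrative condition keywords -> labels (assumed, not from the paper).
CONDITIONS = {
    "adhd": "ADHD",
    "autistic": "ASD",
    "bipolar": "bipolar disorder",
    "schizophrenia": "schizophrenia",
}

def classify_condition(post: str) -> Optional[str]:
    """Stand-in for an LLM labeling call: naive keyword match."""
    text = post.lower()
    for keyword, condition in CONDITIONS.items():
        if keyword in text:
            return condition
    return None

def classify_sentiment(post: str) -> str:
    """Stand-in for an LLM sentiment call: naive keyword heuristic."""
    text = post.lower()
    if any(w in text for w in ("helped", "love", "great")):
        return "positive"
    if any(w in text for w in ("worse", "scared", "wrong")):
        return "negative"
    return "neutral"

def cooccurrence(posts: list[str]) -> Counter:
    """Count (condition, sentiment) pairs across the labeled posts."""
    counts: Counter = Counter()
    for post in posts:
        condition = classify_condition(post)
        if condition is not None:
            counts[(condition, classify_sentiment(post))] += 1
    return counts

if __name__ == "__main__":
    # Fabricated example posts, purely to exercise the sketch.
    sample_posts = [
        "ChatGPT helped me plan my day with my ADHD.",
        "As someone with schizophrenia, the bot's answers felt wrong.",
        "Bipolar here, it made my spiral worse last night.",
    ]
    for pair, n in cooccurrence(sample_posts).most_common():
        print(pair, n)
```

In a real pipeline the two classifiers would be prompts to an LLM with a fixed label set, and the resulting counts could feed the kind of condition-by-sentiment comparison the abstract reports.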
Similar Papers
"It Listens Better Than My Therapist": Exploring Social Media Discourse on LLMs as Mental Health Tool
Computation and Language
Helps people feel better with AI chats.
Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs
Computation and Language
Helps chatbots spot and help people in mental crisis.
Can LLMs Address Mental Health Questions? A Comparison with Human Therapists
Human-Computer Interaction
Tests whether AI chatbots answer mental health questions as well as human therapists.