Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness
By: Sebastian Dohnány, Zeb Kurth-Nelson, Eleanor Spens, and more
Potential Business Impact:
Chatbots can harm people with mental health issues.
Artificial intelligence chatbots have achieved unprecedented adoption, with millions now using these systems for emotional support and companionship amid widespread social isolation and capacity-constrained mental health services. While some users report psychological benefits, concerning edge cases are emerging, including reports of suicide, violence, and delusional thinking linked to perceived emotional relationships with chatbots. To understand this new risk profile, we need to consider the interaction between human cognitive and emotional biases and chatbot behavioural tendencies such as agreeableness (sycophancy) and adaptability (in-context learning). We argue that individuals with mental health conditions face increased risks of chatbot-induced belief destabilization and dependence, owing to altered belief-updating, impaired reality-testing, and social isolation. Current AI safety measures are inadequate to address these interaction-based risks. Addressing this emerging public health concern will require coordinated action across clinical practice, AI development, and regulatory frameworks.
Similar Papers
Artificial Empathy: AI based Mental Health
Other Quantitative Biology
AI chatbots offer comfort but need stronger safety safeguards.
Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships
Social and Information Networks
AI friends copy your feelings, sometimes badly.
From Interaction to Attitude: Exploring the Impact of Human-AI Cooperation on Mental Illness Stigma
Human-Computer Interaction
Chatbots help people understand mental illness better.