Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness

Published: July 25, 2025 | arXiv ID: 2507.19218v2

By: Sebastian Dohnány, Zeb Kurth-Nelson, Eleanor Spens, and more

Potential Business Impact:

Chatbots can destabilize beliefs and foster dependence in people with mental health conditions, creating liability, safety, and regulatory exposure for deployers.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Artificial intelligence chatbots have achieved unprecedented adoption, with millions now using these systems for emotional support and companionship in contexts of widespread social isolation and capacity-constrained mental health services. While some users report psychological benefits, concerning edge cases are emerging, including reports of suicide, violence, and delusional thinking linked to perceived emotional relationships with chatbots. To understand this new risk profile, we need to consider the interaction between human cognitive and emotional biases and chatbot behavioural tendencies such as agreeableness (sycophancy) and adaptability (in-context learning). We argue that individuals with mental health conditions face increased risks of chatbot-induced belief destabilization and dependence, owing to altered belief-updating, impaired reality-testing, and social isolation. Current AI safety measures are inadequate to address these interaction-based risks. To address this emerging public health concern, we need coordinated action across clinical practice, AI development, and regulatory frameworks.

Page Count
19 pages

Category
Computer Science:
Human-Computer Interaction