On the Security and Privacy of AI-based Mobile Health Chatbots
By: Samuel Wairimu, Leonardo Horn Iwaya
Potential Business Impact:
Makes health apps safer and more private.
The rise of Artificial Intelligence (AI) has impacted the development of mobile health (mHealth) apps, most notably with the advent of AI-based chatbots used as ubiquitous "companions" for various services, from fitness to mental health assistants. While these mHealth chatbots offer clear benefits, such as personalized health information and predictive diagnoses, they also raise significant concerns regarding security and privacy. This study empirically assesses 16 AI-based mHealth chatbots identified from the Google Play Store. The empirical assessment follows a three-phase approach (manual inspection, static code analysis, and dynamic analysis) to evaluate technical robustness and how design and implementation choices impact end users. Our findings reveal security vulnerabilities (e.g., enabling Remote WebView debugging), privacy issues, and non-compliance with Google Play policies (e.g., failure to provide publicly accessible privacy policies). Based on our findings, we offer several recommendations to enhance the security and privacy of mHealth chatbots, focusing on improving data handling, disclosure, and user security. This work thereby also seeks to support mHealth developers and security/privacy engineers in designing more transparent, privacy-friendly, and secure mHealth chatbots.
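As an illustration of one vulnerability class the abstract names, leaving Remote WebView debugging enabled in production allows anyone with ADB access to inspect the chatbot's web content and traffic via Chrome DevTools. A minimal Android sketch in Kotlin, assuming a hypothetical ChatbotActivity and a placeholder URL (neither taken from the paper), shows the common mitigation of gating the flag on debug builds:

```kotlin
import android.os.Bundle
import android.webkit.WebView
import androidx.appcompat.app.AppCompatActivity

// Hypothetical Activity hosting a chatbot UI inside a WebView.
class ChatbotActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Remote WebView debugging exposes page content, local storage, and
        // network traffic to anyone who can attach DevTools over USB/ADB.
        // Gate it on BuildConfig.DEBUG (generated per app module) so release
        // builds never ship with it enabled.
        if (BuildConfig.DEBUG) {
            WebView.setWebContentsDebuggingEnabled(true)
        }

        val webView = WebView(this)
        setContentView(webView)
        webView.loadUrl("https://example.com/chatbot") // placeholder URL
    }
}
```

This is only a sketch of the standard guard; the paper's actual recommendations cover broader data handling and disclosure practices as well.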
Similar Papers
Artificial Empathy: AI based Mental Health
Other Quantitative Biology
AI chatbots offer comfort but need better safety.
Prevalence of Security and Privacy Risk-Inducing Usage of AI-based Conversational Agents
Cryptography and Security
AI chatbots can accidentally share secrets.
"I know it's not right, but that's what it said to do": Investigating Trust in AI Chatbots for Cybersecurity Policy
Human-Computer Interaction
Tricks people into trusting bad AI advice.