Ask ChatGPT: Caveats and Mitigations for Individual Users of AI Chatbots
By: Chengen Wang, Murat Kantarcioglu
Potential Business Impact:
Warns how relying on AI chatbots can harm users' thinking, privacy, and social ties, and how to avoid it.
As ChatGPT and other Large Language Model (LLM)-based AI chatbots become increasingly integrated into individuals' daily lives, important research questions arise. What concerns and risks do these systems pose for individual users? What potential harms might they cause, and how can these be mitigated? In this work, we review recent literature and reports and conduct a comprehensive investigation into these questions. We begin by explaining how LLM-based AI chatbots work, providing essential background to help readers understand these chatbots' inherent limitations. We then identify a range of risks associated with individual use of these chatbots, including hallucinations, intrinsic biases, sycophantic behavior, cognitive decline from overreliance, social isolation, and privacy leakage. Finally, we propose several key mitigation strategies to address these concerns. Our goal is to raise awareness of the potential downsides of AI chatbot use and to empower users so that these tools enhance, rather than diminish, human intelligence, and enrich, rather than compromise, daily life.
Similar Papers
Towards Trustworthy AI: Characterizing User-Reported Risks across LLMs "In the Wild"
Computers and Society
Identifies the harms users report from AI chatbots in real-world use.
Perspectives and potential issues in using artificial intelligence for computer science education
Computers and Society
Teaches students to use AI wisely for learning.
Culling Misinformation from Gen AI: Toward Ethical Curation and Refinement
Computers and Society
Helps stop generative AI from spreading misinformation.