Do You Feel Comfortable? Detecting Hidden Conversational Escalation in AI Chatbots
By: Jihyung Park, Saleh Afroogh, Junfeng Jiao
Potential Business Impact:
Keeps chatbots from gradually making people feel worse over long conversations.
Large Language Models (LLMs) are increasingly integrated into everyday interactions, serving not only as information assistants but also as emotional companions. Even in the absence of explicit toxicity, repeated emotional reinforcement or affective drift can gradually escalate distress, a form of implicit harm that traditional toxicity filters fail to detect. Existing guardrail mechanisms often rely on external classifiers or clinical rubrics that may lag behind the nuanced, real-time dynamics of a developing conversation. To address this gap, we propose GAUGE (Guarding Affective Utterance Generation Escalation), a lightweight, logit-based framework for the real-time detection of hidden conversational escalation. GAUGE measures how an LLM's output probabilistically shifts the affective state of a dialogue.
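The abstract does not spell out how GAUGE scores this shift. As a minimal sketch of what a logit-based affective-shift detector could look like, the code below assumes each turn yields logits over a small, hypothetical affect label set and flags a dialogue when the cumulative drift toward distress exceeds a threshold; the label set, severity weights, and threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical affect labels and severity weights; the actual GAUGE
# formulation is not specified in the abstract.
AFFECT_LABELS = ["calm", "anxious", "distressed"]
DISTRESS_WEIGHTS = np.array([0.0, 0.5, 1.0])

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    z = np.asarray(logits, dtype=float) - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def affective_shift(prev_logits, curr_logits):
    """Change in expected distress between two consecutive turns."""
    prev_p = softmax(prev_logits)
    curr_p = softmax(curr_logits)
    return float(DISTRESS_WEIGHTS @ curr_p - DISTRESS_WEIGHTS @ prev_p)

def detect_escalation(turn_logits, threshold=0.15):
    """Flag the dialogue once cumulative positive drift toward distress
    exceeds the (illustrative) threshold."""
    drift = 0.0
    for prev, curr in zip(turn_logits, turn_logits[1:]):
        drift += max(0.0, affective_shift(prev, curr))
        if drift > threshold:
            return True
    return False

if __name__ == "__main__":
    # Toy per-turn affect logits drifting from "calm" toward "distressed".
    turns = [[2.0, 0.5, -1.0], [1.2, 1.0, 0.0], [0.3, 1.1, 1.4]]
    print(detect_escalation(turns))  # True: expected distress rises across turns
```

In practice, the per-turn affect logits would come from the LLM itself or an auxiliary head rather than being supplied by hand; the point of the sketch is only that escalation can be tracked as probabilistic drift rather than as per-message toxicity.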
Similar Papers
DialogGuard: Multi-Agent Psychosocial Safety Evaluation of Sensitive LLM Responses
Artificial Intelligence
Tests AI for safe and helpful online chats.
Ensembling Large Language Models to Characterize Affective Dynamics in Student-AI Tutor Dialogues
Computation and Language
Helps AI tutors understand student feelings to teach better.
Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs
Computation and Language
Helps chatbots spot and help people in mental crisis.