Comparing Self-Disclosure Themes and Semantics to a Human, a Robot, and a Disembodied Agent
By: Sophie Chiang, Guy Laban, Emily S. Cross, and more
Potential Business Impact:
People talk to robots the same way they talk to humans.
As social robots and other artificial agents become more conversationally capable, it is important to understand whether the content and meaning of self-disclosure towards these agents changes depending on the agent's embodiment. In this study, we analysed conversational data from three controlled experiments in which participants self-disclosed to a human, a humanoid social robot, and a disembodied conversational agent. Using sentence embeddings and clustering, we identified themes in participants' disclosures, which were then labelled and explained by a large language model. We subsequently assessed whether these themes and the underlying semantic structure of the disclosures varied by agent embodiment. Our findings reveal strong consistency: thematic distributions did not significantly differ across embodiments, and semantic similarity analyses showed that disclosures were expressed in highly comparable ways. These results suggest that while embodiment may influence human behaviour in human-robot and human-agent interactions, people tend to maintain a consistent thematic focus and semantic structure in their disclosures, whether speaking to humans or artificial interlocutors.
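The analysis pipeline described above (embed disclosures, cluster them into themes, then compare theme distributions and semantic similarity across agent embodiments) can be sketched in miniature. This is a hypothetical illustration, not the authors' code: the disclosure texts and agent labels are invented, TF-IDF vectors stand in for the neural sentence embeddings the paper uses, and the LLM-based theme labelling step is omitted.

```python
# Minimal sketch of the abstract's pipeline, under stated assumptions:
# TF-IDF vectors replace sentence embeddings; the data is invented.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy disclosures, each tagged with the agent it was addressed to.
disclosures = [
    ("human", "I have been feeling stressed about work deadlines lately."),
    ("human", "My family relationships are a big source of support for me."),
    ("robot", "Work pressure keeps me up at night sometimes."),
    ("robot", "I am close to my family and talk to them often."),
    ("agent", "Deadlines at my job are stressing me out."),
    ("agent", "Spending time with family helps me relax."),
]
agents, texts = zip(*disclosures)

# 1. Embed each disclosure as a vector.
X = TfidfVectorizer().fit_transform(texts)

# 2. Cluster the embeddings into themes (k fixed by hand here).
themes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 3. Build a per-agent theme distribution for cross-embodiment comparison.
per_agent = {a: Counter() for a in set(agents)}
for agent, theme in zip(agents, themes):
    per_agent[agent][theme] += 1

# 4. Pairwise semantic similarity between disclosures (cosine similarity).
sims = cosine_similarity(X)

print(per_agent)
```

In the paper itself, the resulting theme distributions are compared statistically across the three embodiments; this sketch only shows how such distributions and similarity matrices would be produced.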
Similar Papers
A Multimodal Neural Network for Recognizing Subjective Self-Disclosure Towards Social Robots
Robotics
Robots learn to understand when people share personal things.
Examining the Utility of Self-disclosure Types for Modeling Annotators of Social Norms
Computation and Language
Predicts how people judge right from wrong.
A Robot That Listens: Enhancing Self-Disclosure and Engagement Through Sentiment-based Backchannels and Active Listening
Human-Computer Interaction
Robot listens better, makes people share more.