Personality over Precision: Exploring the Influence of Human-Likeness on ChatGPT Use for Search
By: Mert Yazan, Frederik Bungaran Ishak Situmeang, Suzan Verberne
Potential Business Impact:
Helps explain why people trust wrong answers from chatbots.
Conversational search interfaces such as ChatGPT offer a more interactive, personalized, and engaging user experience than traditional search. On the downside, they are prone to overtrust: users rely on their responses even when they are incorrect. It is not clear which aspects of the conversational interaction paradigm drive people to adopt it, or how it creates personalized experiences that lead to overtrust. To understand the factors influencing the adoption of conversational interfaces, we conducted a survey with 173 participants. We examined user perceptions of trust, human-likeness (anthropomorphism), and design preferences for ChatGPT versus Google. To better understand the overtrust phenomenon, we asked users about their willingness to trade off factuality for constructs such as ease of use or human-likeness. Our analysis identified two distinct user groups: those who use both ChatGPT and Google daily (DUB) and those who primarily rely on Google (DUG). The DUB group exhibited higher trust in ChatGPT, perceived it as more human-like, and expressed greater willingness to trade factual accuracy for enhanced personalization and conversational flow. Conversely, the DUG group showed lower trust toward ChatGPT but still appreciated aspects such as the ad-free experience and responsive interactions. Demographic analysis further revealed nuanced patterns: middle-aged adults used ChatGPT less frequently yet trusted it more, suggesting potential vulnerability to misinformation. Our findings contribute to understanding user segmentation, emphasize the critical roles of personalization and human-likeness in conversational IR systems, and reveal important implications regarding users' willingness to compromise factual accuracy for more engaging interactions.
Similar Papers
User Prompting Strategies and ChatGPT Contextual Adaptation Shape Conversational Information-Seeking Experiences
Human-Computer Interaction
Examines how users' prompting strategies and ChatGPT's contextual adaptation shape information-seeking experiences.
Understanding Why ChatGPT Outperforms Humans in Visualization Design Advice
Human-Computer Interaction
Examines why ChatGPT gives better visualization design advice than humans.
Behind India's ChatGPT Conversations: A Retrospective Analysis of 238 Unedited User Prompts
Computers and Society
Shows how people in India really use ChatGPT, based on 238 unedited user prompts.