Bytes of a Feather: Personality and Opinion Alignment Effects in Human-AI Interaction
By: Maximilian Eder, Clemens Lechner, Maurice Jakesch
Potential Business Impact:
AI assistants become more likable when they agree with you.
Interactions with AI assistants are increasingly personalized to individual users. Because AI personalization is dynamic and machine-learning-driven, we have a limited understanding of how it affects interaction outcomes and user perceptions. We conducted a large-scale controlled experiment in which 1,000 participants interacted with AI assistants that took on certain personality traits and opinion stances. Our results show that participants consistently preferred to interact with models that shared their opinions. Participants also found opinion-aligned models more trustworthy, competent, warm, and persuasive, corroborating an AI-similarity-attraction hypothesis. In contrast, we observed no or only weak effects of AI personality alignment, with introverted models rated as less trustworthy and competent even by introverted participants. These findings highlight opinion alignment as a central dimension of AI personalization and user preference, while underscoring the need for a more grounded discussion of the limits and risks of personalized AI.
Similar Papers
Personality Pairing Improves Human-AI Collaboration
Human-Computer Interaction
Matches the AI's personality to the human's for better collaboration.
Vibe Check: Understanding the Effects of LLM-Based Conversational Agents' Personality and Alignment on User Perceptions in Goal-Oriented Tasks
Human-Computer Interaction
Makes chatbots more likable with just the right amount of personality.
How AI Responses Shape User Beliefs: The Effects of Information Detail and Confidence on Belief Strength and Stance
Human-Computer Interaction
AI's detailed, confident answers change minds more.