Bytes of a Feather: Personality and Opinion Alignment Effects in Human-AI Interaction

Published: November 13, 2025 | arXiv ID: 2511.10544v1

By: Maximilian Eder, Clemens Lechner, Maurice Jakesch

Potential Business Impact:

AI assistants become more likable when they agree with you.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Interactions with AI assistants are increasingly personalized to individual users. Because AI personalization is dynamic and machine-learning-driven, we have limited understanding of how it affects interaction outcomes and user perceptions. We conducted a large-scale controlled experiment in which 1,000 participants interacted with AI assistants that took on certain personality traits and opinion stances. Our results show that participants consistently preferred to interact with models that shared their opinions. Participants also found opinion-aligned models more trustworthy, competent, warm, and persuasive, corroborating an AI-similarity-attraction hypothesis. In contrast, we observed no or only weak effects of AI personality alignment; notably, introvert participants rated introvert models as less trustworthy and competent. These findings highlight opinion alignment as a central dimension of AI personalization and user preference, while underscoring the need for a more grounded discussion of the limits and risks of personalized AI.

Country of Origin
🇩🇪 Germany

Page Count
23 pages

Category
Computer Science:
Human-Computer Interaction