Misalignment of LLM-Generated Personas with Human Perceptions in Low-Resource Settings
By: Tabia Tanzin Prama, Christopher M. Danforth, Peter Sheridan Dodds
Potential Business Impact:
AI personas don't understand people the way real humans do.
Recent advances enable Large Language Models (LLMs) to generate AI personas, yet their lack of deep contextual, cultural, and emotional understanding remains a significant limitation. This study quantitatively compared human responses with those of eight LLM-generated social personas (e.g., Male, Female, Muslim, Political Supporter) in a low-resource setting, Bangladesh, using culturally specific questions. Results show that human responses significantly outperform all LLMs in answering the questions and across all metrics of persona perception, with particularly large gaps in empathy and credibility. Furthermore, LLM-generated content exhibited a systematic positivity bias consistent with the "Pollyanna Principle," scoring measurably higher in positive sentiment ($\Phi_{avg} = 5.99$ for LLMs vs. $5.60$ for humans). These findings suggest that LLM personas do not accurately reflect the authentic experiences of real people in resource-scarce environments. It is essential to validate LLM personas against real-world human data to ensure their alignment and reliability before deploying them in social science research.
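The abstract does not spell out how $\Phi_{avg}$ is computed; the sketch below is one plausible reading, assuming a labMT-style word-happiness lexicon where each rated word carries a happiness score on a 1-9 scale and a text's score is the frequency-weighted average over its rated words. The lexicon values and example texts here are made up for illustration and are not from the paper.

```python
from collections import Counter

# Hypothetical word-happiness ratings (stand-in for a real lexicon such as labMT).
SAMPLE_LEXICON = {
    "happy": 8.3,
    "family": 7.7,
    "work": 5.2,
    "struggle": 3.1,
    "loss": 2.8,
}


def phi_avg(text: str, lexicon: dict) -> float:
    """Frequency-weighted average happiness of the lexicon-rated words in `text`."""
    counts = Counter(text.lower().split())
    rated = {w: n for w, n in counts.items() if w in lexicon}
    total = sum(rated.values())
    if total == 0:
        return float("nan")  # no rated words, score undefined
    return sum(lexicon[w] * n for w, n in rated.items()) / total


# A higher score for LLM-generated text than for human text would reflect the
# positivity bias (Pollyanna Principle) described above.
human_text = "work and family bring happy days but also struggle and loss"
llm_text = "happy family and happy work every day"
print(phi_avg(human_text, SAMPLE_LEXICON))  # lower average
print(phi_avg(llm_text, SAMPLE_LEXICON))    # higher average
```

Under this assumed scoring, comparing the averages of two corpora (human answers vs. LLM-persona answers) gives a single-number sentiment gap like the 5.60 vs. 5.99 reported in the abstract.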
Similar Papers
The Impostor is Among Us: Can Large Language Models Capture the Complexity of Human Personas?
Human-Computer Interaction
AI creates design helpers, but watch for stereotypes.
LLM-Generated Ads: From Personalization Parity to Persuasion Superiority
Computers and Society
AI ads persuade people better than human ads.
Personas Evolved: Designing Ethical LLM-Based Conversational Agent Personalities
Human-Computer Interaction
Makes AI chatbots safer and more honest.