AI as a deliberative partner fosters intercultural empathy for Americans but fails for Latin American participants
By: Isabel Villanueva, Tara Bobinac, Binwei Yao, and more
Potential Business Impact:
AI deliberation tools can help American users understand other cultures better, but they do not yet serve Latin American users equally well.
Despite increasing AI chatbot deployment in public discourse, empirical evidence on their capacity to foster intercultural empathy remains limited. Through a randomized experiment, we assessed how different AI deliberation approaches (cross-cultural deliberation, which presents other-culture perspectives; own-culture deliberation, which represents participants' own culture; and a non-deliberative control) affect intercultural empathy among American and Latin American participants. Cross-cultural deliberation increased intercultural empathy among American participants through positive emotional engagement, but produced no such effects for Latin American participants, who perceived AI responses as culturally inauthentic despite explicit prompting to represent their cultural perspectives. Our analysis of participant-driven feedback, in which users directly flagged and explained culturally inappropriate AI responses, revealed systematic gaps in the AI's representation of Latin American contexts that persist despite sophisticated prompt engineering. These findings demonstrate that current approaches to AI cultural alignment, including linguistic adaptation and explicit cultural prompting, cannot fully address deeper representational asymmetries in AI systems. Our work advances both deliberation theory and AI alignment research by revealing how the same AI system can simultaneously promote intercultural understanding for one cultural group while failing for another, with critical implications for designing equitable AI systems for cross-cultural democratic discourse.
Similar Papers
Advancing Equitable AI: Evaluating Cultural Expressiveness in LLMs for Latin American Contexts
Social and Information Networks
Tests how well AI expresses Latin American culture.
Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide
Computers and Society
People trust humans more than AI for discussions.
Biased AI improves human decision-making but reduces trust
Human-Computer Interaction
Biased AI helps people think better, but they trust it less.