The Ideological Turing Test for Moderation of Outgroup Affective Animosity
By: David Gamba, Daniel M. Romero, Grant Schoenebeck
Rising animosity toward ideological opponents poses critical societal challenges. We introduce and test the Ideological Turing Test, a gamified framework requiring participants to adopt and defend opposing viewpoints, as a means of reducing out-group affective animosity and affective polarization. We conducted a mixed-design experiment ($N = 203$) with four conditions crossing modality (writing vs. debate) with perspective-taking (own vs. opposite side). Participants engaged in structured interactions defending assigned positions, with outcomes judged by peers. We measured changes in affective animosity and ideological position immediately post-intervention and at a 2-6 week follow-up. Perspective-taking reduced out-group animosity and ideological polarization, but effects differed by modality and over time. For affective animosity, writing from the opposite perspective yielded the largest immediate reduction ($\Delta = +0.45$ SD), but the effect was not detectable at follow-up. In contrast, the debate modality maintained a statistically significant reduction in animosity both immediately after the intervention and at follow-up ($\Delta = +0.37$ SD). For ideological position, adopting the opposite perspective led to significant immediate movement in both modalities (writing: $\Delta = +0.91$ SD; debate: $\Delta = +0.51$ SD), and these changes persisted at follow-up. Judged performance (winning) did not moderate these effects, and willingness to re-participate was similar across conditions (roughly 20-36%). These findings challenge assumptions about adversarial methods and reveal distinct temporal patterns: non-adversarial engagement fosters short-term empathy gains, while cognitive engagement through debate sustains affective benefits. The Ideological Turing Test shows promise as a scalable tool for reducing polarization, particularly when perspective-taking is combined with reflective adversarial interaction.
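To make the "$\Delta$ in SD units" effect sizes concrete, the following is a minimal sketch, assuming the standardized change is the pre-to-post difference divided by the baseline standard deviation and averaged within each cell of the modality-by-perspective design. The column names and simulated data are hypothetical illustrations, not the authors' data or analysis code.

```python
# Minimal sketch (hypothetical data): standardized pre/post change ("Delta" in SD units)
# per cell of a modality x perspective-taking design.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per participant.
n = 203
df = pd.DataFrame({
    "modality": rng.choice(["writing", "debate"], size=n),
    "perspective": rng.choice(["own", "opposite"], size=n),
    "animosity_pre": rng.normal(60, 15, size=n),  # e.g., a 0-100 feeling-thermometer-style score
})
# Simulate a post-intervention score with a modest average reduction in animosity.
df["animosity_post"] = df["animosity_pre"] - rng.normal(5, 10, size=n)

# Standardize the pre-to-post change by the baseline SD so effects are
# comparable across measures; positive values indicate reduced animosity.
baseline_sd = df["animosity_pre"].std(ddof=1)
df["delta_sd"] = (df["animosity_pre"] - df["animosity_post"]) / baseline_sd

# Mean standardized reduction per condition.
print(df.groupby(["modality", "perspective"])["delta_sd"].agg(["mean", "count"]))
```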