Mental Models of Autonomy and Sentience Shape Reactions to AI
By: Janet V. T. Pauketat, Daniel B. Shank, Aikaterina Manoli, and more
Potential Business Impact:
Shows that framing an AI as sentient makes people care more about it, while framing it as autonomous makes it seem more threatening.
Narratives about artificial intelligence (AI) entangle autonomy, the capacity to self-govern, with sentience, the capacity to sense and feel. AI agents that perform tasks autonomously and AI companions that recognize and express emotions may activate mental models of autonomy and sentience, respectively, provoking distinct reactions. To examine this possibility, we conducted three pilot studies (N = 374) and four preregistered vignette experiments describing an AI as autonomous, sentient, both, or neither (N = 2,702). Activating a mental model of sentience increased general mind perception (cognition and emotion) and moral consideration more than activating a mental model of autonomy did, whereas autonomy increased perceived threat more than sentience did. Sentience also increased perceived autonomy more than vice versa. A within-paper meta-analysis showed that, on average, sentience changed reactions more than autonomy. By disentangling these mental models of AI, we can study human-AI interaction with more precision and better navigate the design of anthropomorphized AI and prompting interfaces.
Similar Papers
Bridging the Gap: Toward Cognitive Autonomy in Artificial Intelligence
Artificial Intelligence
AI learns to fix its own mistakes.
Artificial Intelligence (AI) and the Relationship between Agency, Autonomy, and Moral Patiency
Computers and Society
AI can't think for itself, but might learn ethics.
Human Autonomy and Sense of Agency in Human-Robot Interaction: A Systematic Literature Review
Human-Computer Interaction
Helps robots respect your choices and feelings.