Mental Models of Autonomy and Sentience Shape Reactions to AI

Published: December 9, 2025 | arXiv ID: 2512.09085v1

By: Janet V. T. Pauketat, Daniel B. Shank, Aikaterina Manoli, et al.

Potential Business Impact:

Framing an AI as sentient makes people perceive it as more minded and grant it more moral consideration.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Narratives about artificial intelligence (AI) entangle autonomy, the capacity to self-govern, with sentience, the capacity to sense and feel. AI agents that perform tasks autonomously and companions that recognize and express emotions may activate mental models of autonomy and sentience, respectively, provoking distinct reactions. To examine this possibility, we conducted three pilot studies (N = 374) and four preregistered vignette experiments describing an AI as autonomous, sentient, both, or neither (N = 2,702). Activating a mental model of sentience increased general mind perception (cognition and emotion) and moral consideration more than autonomy, but autonomy increased perceived threat more than sentience. Sentience also increased perceived autonomy more than vice versa. Based on a within-paper meta-analysis, sentience changed reactions more than autonomy on average. By disentangling different mental models of AI, we can study human-AI interaction with more precision to better navigate the detailed design of anthropomorphized AI and prompting interfaces.

Country of Origin
🇺🇸 United States

Page Count
37 pages

Category
Computer Science:
Human-Computer Interaction