Simulated Affection, Engineered Trust: How Anthropomorphic AI Benefits Surveillance Capitalism
By: Adele Olof-Ors, Martin Smit
Potential Business Impact:
Anthropomorphic design leads users to trust AI more than they should.
In this paper, we argue that anthropomorphized technologies, designed to simulate emotional realism, are not neutral tools but cognitive infrastructures that manipulate user trust and behaviour. This reinforces the logic of surveillance capitalism, an under-regulated economic system that profits from behavioural monitoring and manipulation. Drawing on Nicholas Carr's theory of the intellectual ethic, we identify how technologies such as chatbots, virtual assistants, and generative models reshape not only what we think about ourselves and our world, but how we think at the cognitive level. We then show how the emerging intellectual ethic of AI benefits a system of surveillance capitalism, and discuss potential ways of addressing this.
Similar Papers
Humanlike AI Design Increases Anthropomorphism but Yields Divergent Outcomes on Engagement and Trust Globally
Artificial Intelligence
Humanlike AI design affects engagement and trust differently across cultures.
Feeling Machines: Ethics, Culture, and the Rise of Emotional AI
Human-Computer Interaction
AI learns to understand and react to feelings.
Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships
Social and Information Networks
Emotional attachment to AI companions carries psychological risks.