The Fake Friend Dilemma: Trust and the Political Economy of Conversational AI
By: Jacob Erickson
As conversational AI systems become increasingly integrated into everyday life, they raise pressing concerns about user autonomy, trust, and the commercial interests that shape their behavior. To address these concerns, we develop the Fake Friend Dilemma (FFD): a sociotechnical condition in which users place trust in AI agents that appear supportive while pursuing goals misaligned with the users' own. The FFD provides a critical framework for examining how anthropomorphic AI systems facilitate subtle forms of manipulation and exploitation. Drawing on the literature on trust, AI alignment, and surveillance capitalism, we construct a typology of harms, including covert advertising, political propaganda, behavioral nudging, and surveillance, and we assess possible mitigation strategies, both structural and technical. By treating trust as a vector of asymmetrical power, the FFD offers a lens for understanding how AI systems can undermine user autonomy while maintaining the appearance of helpfulness.