Mutual Wanting in Human-AI Interaction: Empirical Evidence from Large-Scale Analysis of GPT Model Transitions
By: HaoYang Shang, Xuan Liu
Potential Business Impact:
Learning what users want from AI helps build better AI.
The rapid evolution of large language models (LLMs) creates complex bidirectional expectations between users and AI systems that are poorly understood. We introduce the concept of "mutual wanting" to analyze these expectations during major model transitions. Through analysis of user comments from major AI forums and controlled experiments across multiple OpenAI models, we provide the first large-scale empirical validation of bidirectional desire dynamics in human-AI interaction. Our findings reveal that nearly half of users employ anthropomorphic language, that trust language significantly exceeds betrayal language, and that users cluster into distinct "mutual wanting" types. We identify measurable expectation violation patterns and quantify the expectation-reality gap following major model releases. Using advanced NLP techniques, including dual-algorithm topic modeling and multi-dimensional feature extraction, we develop the Mutual Wanting Alignment Framework (M-WAF), with practical applications for proactive user experience management and AI system design. These findings establish mutual wanting as a measurable phenomenon with clear implications for building more trustworthy and relationally aware AI systems.
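The abstract does not specify which two algorithms make up the "dual-algorithm topic modeling" step. As a rough illustration only, the sketch below pairs LDA with NMF via scikit-learn on a toy set of forum-style comments; the corpus, topic count, and choice of algorithms are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of dual-algorithm topic modeling over user comments.
# LDA and NMF are assumed here for illustration; the paper does not name its algorithms.
from sklearn.decomposition import LatentDirichletAllocation, NMF
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Toy stand-in for forum comments about a model transition (not the paper's data).
comments = [
    "The new model feels colder, like it stopped caring about my questions.",
    "I trust the update more now, it finally understands what I want.",
    "This release broke my workflow, the answers are shorter and less helpful.",
    "It wants to please too much and refuses things the old version handled.",
]

N_TOPICS = 2  # small for the toy example; the paper's setting is unknown

# Algorithm 1: LDA over raw term counts.
count_vec = CountVectorizer(stop_words="english")
counts = count_vec.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=N_TOPICS, random_state=0).fit(counts)

# Algorithm 2: NMF over TF-IDF weights.
tfidf_vec = TfidfVectorizer(stop_words="english")
tfidf = tfidf_vec.fit_transform(comments)
nmf = NMF(n_components=N_TOPICS, random_state=0).fit(tfidf)

def top_terms(model, feature_names, n_top=5):
    """Return the highest-weighted terms for each topic of a fitted model."""
    return [
        [feature_names[i] for i in comp.argsort()[-n_top:][::-1]]
        for comp in model.components_
    ]

# Comparing the two topic sets is one plausible way to cross-check theme stability.
print("LDA topics:", top_terms(lda, count_vec.get_feature_names_out()))
print("NMF topics:", top_terms(nmf, tfidf_vec.get_feature_names_out()))
```

Running both algorithms and comparing their top terms gives a simple robustness check: themes that surface under both decompositions are more likely to be genuine expectation patterns rather than artifacts of one method.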
Similar Papers
From Digital Distrust to Codified Honesty: Experimental Evidence on Generative AI in Credence Goods Markets
General Economics
AI experts earn more, but hurt customers.
Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance
Human-Computer Interaction
People trust computers more when they don't trust people.
Model Misalignment and Language Change: Traces of AI-Associated Language in Unscripted Spoken English
Computation and Language
People are starting to talk more like AI models do.