Users Mispredict Their Own Preferences for AI Writing Assistance
By: Vivian Lai, Zana Buçinca, Nil-Jana Akpinar, and more
Potential Business Impact:
AI helps you write by guessing what you need.
Proactive AI writing assistants need to predict when users want drafting help, yet we lack an empirical understanding of what drives these preferences. Through a factorial vignette study with 50 participants making 750 pairwise comparisons, we find that compositional effort dominates decisions (ρ = 0.597) while urgency shows no predictive power (ρ ≈ 0). More critically, users exhibit a striking perception-behavior gap: they rank urgency first in self-reports despite it being the weakest behavioral driver, a complete preference inversion. This misalignment has measurable consequences. Systems designed from users' stated preferences achieve only 57.7% accuracy, underperforming even naive baselines, while systems using behavioral patterns reach a significantly higher 61.3% (p < 0.05). These findings demonstrate that relying on user introspection for system design actively misleads optimization, with direct implications for proactive natural language generation (NLG) systems.
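To make the abstract's rank-correlation claim concrete, here is a minimal, hypothetical sketch of how one might estimate which factor drives pairwise choices. The simulated data, factor scales, and the logistic choice model are illustrative assumptions, not the authors' dataset or analysis code; it only shows how a strong effort effect and a null urgency effect would surface as Spearman ρ values.

```python
import random

def rankdata(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # group tied values and assign them their average rank
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank-transformed data."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Simulate 750 pairwise comparisons. Each pair of vignettes differs in
# compositional effort and urgency (hypothetical 1-5 scales, so the
# difference ranges over -4..4).
effort_diff, urgency_diff, choices = [], [], []
for _ in range(750):
    de = random.randint(-4, 4)  # effort(A) - effort(B)
    du = random.randint(-4, 4)  # urgency(A) - urgency(B)
    # Assumed choice model: picking A tracks the effort gap but
    # ignores urgency, mirroring the paper's reported pattern.
    p_choose_a = 1 / (1 + 2.718281828 ** (-de))
    choices.append(1 if random.random() < p_choose_a else 0)
    effort_diff.append(de)
    urgency_diff.append(du)

rho_effort = spearman(effort_diff, choices)
rho_urgency = spearman(urgency_diff, choices)
print(f"rho(effort)  = {rho_effort:.2f}")   # substantial and positive
print(f"rho(urgency) = {rho_urgency:.2f}")  # near zero
```

Under this assumed model, the factor that actually governs choices shows a large positive ρ while the irrelevant factor's ρ hovers near zero, which is the shape of the perception-behavior gap the study reports.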
Similar Papers
Beyond Mimicry: Preference Coherence in LLMs
Artificial Intelligence
AI doesn't always make smart choices when faced with tough decisions.
"Pragmatic Tools or Empowering Friends?" Discovering and Co-Designing Personality-Aligned AI Writing Companions
Human-Computer Interaction
Makes AI writing tools fit your personality.
Beyond Correctness: Evaluating Subjective Writing Preferences Across Cultures
Computation and Language
Helps computers judge writing quality better.