"Can You Tell Me?": Designing Copilots to Support Human Judgement in Online Information Seeking
By: Markus Bink, Marten Risius, Udo Kruschwitz, and more
Potential Business Impact:
Helps people check if AI answers are true.
Generative AI (GenAI) tools are transforming information seeking, but their fluent, authoritative responses risk overreliance and discourage independent verification and reasoning. Rather than replacing users' cognitive work, GenAI systems should be designed to support and scaffold it. This paper therefore introduces an LLM-based conversational copilot designed to scaffold information evaluation and foster digital literacy skills rather than provide direct answers. In a pre-registered, randomised controlled trial (N=261) examining three interface conditions, including a chat-based copilot, our mixed-methods analysis reveals that users engaged deeply with the copilot and demonstrated metacognitive reflection. However, the copilot did not significantly improve answer correctness or search engagement, largely due to a "time-on-chat vs. exploration" trade-off and users' bias toward positive information. Qualitative findings reveal a tension between the copilot's Socratic approach and users' desire for efficiency. These results highlight both the promise and the pitfalls of pedagogical copilots, and we outline design pathways that reconcile literacy goals with efficiency demands.
Similar Papers
Developers' Experience with Generative AI -- First Insights from an Empirical Mixed-Methods Field Study
Human-Computer Interaction
Helps coders work faster with AI tools.
From Tool to Teacher: Rethinking Search Systems as Instructive Interfaces
Human-Computer Interaction
Teaches you to find and understand information better.
Blending Queries and Conversations: Understanding Tactics, Trust, Verification, and System Choice in Web Search and Chat Interactions
Human-Computer Interaction
AI helps people find health info, but can trick them.