Chatbots for Data Collection in Surveys: A Comparison of Four Theory-Based Interview Probes
By: Rune M. Jacobsen, Samuel Rhys Cox, Carla F. Griggio, and more
Potential Business Impact:
Chatbots ask better questions in surveys.
Surveys are a widespread method for collecting data at scale, but their rigid structure often limits the depth of qualitative insights obtained. While interviews naturally yield richer responses, they are challenging to conduct across diverse locations and large participant pools. To partially bridge this gap, we investigate the potential of LLM-based chatbots to support qualitative data collection through interview probes embedded in surveys. We assess four theory-based interview probes: descriptive, idiographic, clarifying, and explanatory. Through a split-plot study design (N=64), we compare the probes' impact on response quality and user experience across three key stages of HCI research: exploration, requirements gathering, and evaluation. Our results show that probes facilitate the collection of high-quality survey data, with specific probes proving effective at different research stages. We contribute practical and methodological implications for using chatbots as research tools to enrich qualitative data collection.
Similar Papers
AI-Assisted Conversational Interviewing: Effects on Data Quality and User Experience
Human-Computer Interaction
AI chatbots get better answers from people.
Automated Survey Collection with LLM-based Conversational Agents
Computation and Language
AI calls people to ask health questions.
How Do LLMs Persuade? Linear Probes Can Uncover Persuasion Dynamics in Multi-Turn Conversations
Computation and Language
Helps computers understand how people are persuaded.