Synthetic Cognitive Walkthrough: Aligning Large Language Model Performance with Human Cognitive Walkthrough
By: Ruican Zhong, David W. McDonald, Gary Hsieh
Potential Business Impact:
Automates usability testing to make apps easier to use.
Conducting usability evaluations like the cognitive walkthrough (CW) can be costly. Recent developments in large language models (LLMs), with visual reasoning and UI navigation capabilities, present opportunities to automate CW. We explored whether LLMs (GPT-4 and Gemini-2.5-pro) can simulate human behavior in CW by comparing their walkthroughs with those of human participants. While LLMs could navigate interfaces and provide reasonable rationales, their behavior differed from that of humans: LLM-prompted CW achieved higher task completion rates, followed more direct navigation paths, and identified fewer potential failure points. However, follow-up studies demonstrated that with additional prompting, LLMs can predict human-identified failure points, aligning their performance with human participants. Our work highlights that while LLMs may not replicate human behaviors exactly, they can be leveraged to scale usability walkthroughs and provide UI insights, offering a valuable complement to traditional usability testing.
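The paper's exact prompts and pipeline are not reproduced here, but as a rough illustration of what an LLM-prompted walkthrough step could look like, the sketch below asks a chat model to role-play a first-time user and answer the four classic CW questions for a single UI action, with an instruction to flag potential failure points (echoing the paper's finding that such additional prompting helps). The model name, prompt wording, and helper function are assumptions for illustration, not the authors' protocol.

```python
# Illustrative sketch (not the authors' method): prompting an LLM to answer
# the standard cognitive walkthrough (CW) questions for one step of a UI task.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The four classic CW questions (Wharton et al.)
CW_QUESTIONS = (
    "1. Will the user try to achieve the right effect?\n"
    "2. Will the user notice that the correct action is available?\n"
    "3. Will the user associate the correct action with the desired effect?\n"
    "4. If the correct action is performed, will the user see that progress "
    "is being made toward the goal?"
)

def walkthrough_step(task: str, screen_description: str, step: str) -> str:
    """Ask the model to role-play a first-time user and answer the four
    CW questions for a single candidate action on the current screen."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; any chat-capable model could be swapped in
        messages=[
            {
                "role": "system",
                "content": (
                    "You are simulating a first-time user performing a "
                    "cognitive walkthrough of a user interface. Answer each "
                    "question with yes/no plus a brief rationale, and flag "
                    "any potential failure points a real user might hit."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Task: {task}\n"
                    f"Current screen: {screen_description}\n"
                    f"Candidate action: {step}\n\n"
                    f"Questions:\n{CW_QUESTIONS}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical task and screen, purely for demonstration.
    print(
        walkthrough_step(
            task="Set a daily medication reminder",
            screen_description="Home screen with tabs: Today, Meds, Profile",
            step="Tap the 'Meds' tab",
        )
    )
```

A full walkthrough would iterate this call over every step in the task's action sequence and aggregate the flagged failure points across steps.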
Similar Papers
Large Language Models Do Not Simulate Human Psychology
Artificial Intelligence
Shows that LLMs don't respond like humans in psychology studies.
Real-Time World Crafting: Generating Structured Game Behaviors from Natural Language with Large Language Models
Human-Computer Interaction
Lets players "program" game actions with words.
From Prompting to Partnering: Personalization Features for Human-LLM Interactions
Human-Computer Interaction
Makes AI easier to use and understand.