Score: 1

Synthetic Cognitive Walkthrough: Aligning Large Language Model Performance with Human Cognitive Walkthrough

Published: December 3, 2025 | arXiv ID: 2512.03568v1

By: Ruican Zhong, David W. McDonald, Gary Hsieh

Affiliations: University of Washington

Potential Business Impact:

Automates usability walkthroughs of app interfaces, cutting the cost of usability testing.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Conducting usability testing such as a cognitive walkthrough (CW) can be costly. Recent developments in large language models (LLMs), with visual reasoning and UI navigation capabilities, present opportunities to automate CW. We explored whether LLMs (GPT-4 and Gemini-2.5-pro) can simulate human behavior in CW by comparing their walkthroughs with those of human participants. While the LLMs could navigate interfaces and provide reasonable rationales, their behavior differed from humans': LLM-prompted CW achieved higher task completion rates than humans and followed more efficient navigation paths, while identifying fewer potential failure points. However, follow-up studies demonstrated that with additional prompting, LLMs can predict human-identified failure points, aligning their performance with that of human participants. Our work highlights that while LLMs may not replicate human behavior exactly, they can be leveraged to scale usability walkthroughs and provide UI insights, offering a valuable complement to traditional usability testing.
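
The summary describes the method only at a high level, and the paper's actual prompts are not reproduced here. As a rough illustration, the core loop of an LLM-prompted cognitive walkthrough step might look like the minimal Python sketch below, which uses the OpenAI chat completions API. The prompt wording, the framing of the four CW questions, and the example task and screen description are all illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch of one LLM-prompted cognitive walkthrough step.
# Hypothetical: the prompt text and screen description are illustrative
# assumptions, not the prompts used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Classic CW questions asked at each step, plus an action choice and an
# explicit request to flag failure points (the kind of additional
# prompting the paper found brings LLM output closer to human results).
CW_PROMPT = """You are simulating a first-time user performing this task:
{task}

Current screen (textual description of visible UI elements):
{screen}

Answer the four cognitive walkthrough questions for this step:
1. Will the user try to achieve the right effect?
2. Will the user notice that the correct action is available?
3. Will the user associate the correct action with the desired effect?
4. If the correct action is performed, will the user see progress?

Then name the single UI element you would interact with next, and flag
any point where a real user might fail or hesitate."""


def walkthrough_step(task: str, screen: str) -> str:
    """Run one CW step and return the model's reasoning and chosen action."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": CW_PROMPT.format(task=task, screen=screen)}
        ],
    )
    return response.choices[0].message.content


# Example usage with a made-up task and screen description:
print(walkthrough_step(
    task="Set a 7:00 AM weekday alarm",
    screen="Home screen: [Clock icon] [Calendar icon] [Settings gear]",
))
```

The explicit "flag any point where a real user might fail" instruction reflects the paper's follow-up finding: without such prompting, the LLM tends to take a near-optimal path and report fewer failure points than human participants do.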

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Human-Computer Interaction