Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents

Published: April 24, 2025 | arXiv ID: 2504.17934v2

By: Chaoran Chen, Zhiping Zhang, Ibrahim Khalilov, and more

BigTech Affiliations: University of Washington, Johns Hopkins University

Potential Business Impact:

Helps protect users' private information from AI assistants that operate their computers.

Business Areas:
Human Computer Interaction Design, Science and Engineering

The rise of Large Language Models (LLMs) has revolutionized Graphical User Interface (GUI) automation through LLM-powered GUI agents, yet their ability to process sensitive data with limited human oversight raises significant privacy and security risks. This position paper identifies three key risks of GUI agents and examines how they differ from traditional GUI automation and general autonomous agents. Despite these risks, existing evaluations focus primarily on performance, leaving privacy and security assessments largely unexplored. We review current evaluation metrics for both GUI and general LLM agents and outline five key challenges in integrating human evaluators for GUI agent assessments. To address these gaps, we advocate for a human-centered evaluation framework that incorporates risk assessments, enhances user awareness through in-context consent, and embeds privacy and security considerations into GUI agent design and evaluation.

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Human-Computer Interaction