Towards Recommending Usability Improvements with Multimodal Large Language Models
By: Sebastian Lubos, Alexander Felfernig, Gerhard Leitner, and more
Potential Business Impact:
AI models could help find user-interface usability problems faster and at lower cost.
Usability describes a set of essential quality attributes of user interfaces (UIs) that influence human-computer interaction. Common evaluation methods, such as usability testing and inspection, are effective but resource-intensive and require expert involvement, which makes them less accessible for smaller organizations. Recent advances in multimodal large language models (LLMs) offer promising opportunities to partly automate usability evaluation by analyzing the textual, visual, and structural aspects of software interfaces. To investigate this possibility, we formulate usability evaluation as a recommendation task in which a multimodal LLM ranks usability issues by severity. We conducted an initial proof-of-concept study comparing LLM-generated usability improvement recommendations with assessments by usability experts. Our findings indicate that LLMs have the potential to enable faster and more cost-effective usability evaluation, making them a practical alternative in contexts with limited expert resources.
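A minimal sketch of how such a severity-ranking query could look in practice, assuming the OpenAI Python SDK and a vision-capable model; the model name, prompt wording, and JSON output schema are illustrative assumptions, not the authors' actual evaluation setup.

```python
# Sketch: ask a multimodal LLM to rank usability issues found in a UI screenshot.
# Assumptions (not from the paper): OpenAI Python SDK, the "gpt-4o" model name,
# the prompt text, and the output schema are all illustrative choices.
import base64
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rank_usability_issues(screenshot_path: str) -> list[dict]:
    """Send a UI screenshot to a multimodal LLM and return severity-ranked usability issues."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    prompt = (
        "You are a usability expert. Inspect this user interface screenshot and report "
        "usability issues as a JSON object with a key 'issues' holding an array ordered "
        "from most to least severe. Each item must have the keys: issue, heuristic, "
        "severity (1-5), recommendation."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any vision-capable LLM could be substituted
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        response_format={"type": "json_object"},
    )

    # The model is instructed to return JSON; real use would validate the structure.
    return json.loads(response.choices[0].message.content)["issues"]


if __name__ == "__main__":
    # "checkout_page.png" is a hypothetical screenshot path.
    for issue in rank_usability_issues("checkout_page.png"):
        print(issue["severity"], issue["issue"], "->", issue["recommendation"])
```

Framing the output as a severity-ordered list mirrors the recommendation view taken in the paper: the ranking, rather than a flat list of findings, is what gets compared against expert judgments.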
Similar Papers
Towards LLM-Based Usability Analysis for Recommender User Interfaces
Human-Computer Interaction
Helps make apps easier to use by checking their design.
Catching UX Flaws in Code: Leveraging LLMs to Identify Usability Flaws at the Development Stage
Software Engineering
AI checks app code for usability problems while it is being built.
MLLM as a UI Judge: Benchmarking Multimodal LLMs for Predicting Human Perception of User Interfaces
Human-Computer Interaction
AI helps designers pick the best app looks.