Multi-Tool Analysis of User Interface & Accessibility in Deployed Web-Based Chatbots
By: Mukesh Rajmohan, Smit Desai, Sanchari Das
Potential Business Impact:
Finds deployed chatbots that are hard for people with disabilities to use.
In this work, we present a multi-tool evaluation of 106 deployed web-based chatbots spanning domains such as healthcare, education, and customer service, comprising both standalone applications and embedded widgets. The evaluation combines automated tools (Google Lighthouse, PageSpeed Insights, SiteImprove Accessibility Checker) with manual audits (Microsoft Accessibility Insights). Our analysis reveals that over 80% of chatbots exhibit at least one critical accessibility issue, and 45% suffer from missing semantic structures or ARIA role misuse. Furthermore, we found that accessibility scores correlate strongly across tools (e.g., Lighthouse vs. PageSpeed Insights, r = 0.861), whereas performance scores do not (r = 0.436), underscoring the value of a multi-tool approach. We offer replicable evaluation insights and actionable recommendations to support the development of user-friendly conversational interfaces.
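As an illustration of the cross-tool correlation analysis described in the abstract, the sketch below shows how Pearson correlations between per-chatbot scores from two audit tools could be computed. The file name `scores.csv`, the column names, and the use of `scipy.stats.pearsonr` are assumptions for illustration only, not the authors' actual analysis pipeline.

```python
# Minimal sketch: correlating per-chatbot scores from two audit tools.
# Assumes a hypothetical CSV with one row per chatbot and columns such as
# "lighthouse_accessibility" and "psi_accessibility"; this is illustrative,
# not the paper's actual pipeline.
import csv
from scipy.stats import pearsonr

def load_scores(path, col_a, col_b):
    """Read paired scores for two tools, skipping rows missing either score."""
    xs, ys = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                xs.append(float(row[col_a]))
                ys.append(float(row[col_b]))
            except (KeyError, ValueError):
                continue  # chatbot lacks a score from one of the tools
    return xs, ys

if __name__ == "__main__":
    # Accessibility scores: expected to correlate strongly across tools.
    acc_lh, acc_psi = load_scores(
        "scores.csv", "lighthouse_accessibility", "psi_accessibility"
    )
    r_acc, p_acc = pearsonr(acc_lh, acc_psi)
    print(f"Accessibility: r = {r_acc:.3f} (p = {p_acc:.3g})")

    # Performance scores: expected to correlate more weakly.
    perf_lh, perf_psi = load_scores(
        "scores.csv", "lighthouse_performance", "psi_performance"
    )
    r_perf, p_perf = pearsonr(perf_lh, perf_psi)
    print(f"Performance:   r = {r_perf:.3f} (p = {p_perf:.3g})")
```

In such a setup, a high accessibility correlation alongside a low performance correlation would mirror the paper's finding that a single tool is not a reliable proxy for performance measurements.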
Similar Papers
Toward Inclusive Low-Code Development: Detecting Accessibility Issues in User Reviews
Software Engineering
Helps make apps easier for people with bad eyesight.
Multi-Faceted Evaluation of Tool-Augmented Dialogue Systems
Computation and Language
Finds hidden mistakes in talking computer helpers.
Towards Scalable Web Accessibility Audit with MLLMs as Copilots
Artificial Intelligence
Helps make websites work for everyone.