Can AI agents understand spoken conversations about data visualizations in online meetings?
By: Rizul Sharma, Tianyu Jiang, Seokki Lee, and more
Potential Business Impact:
AI can follow spoken discussions about charts in meetings.
In this short paper, we present work evaluating an AI agent's understanding of spoken conversations about data visualizations in an online meeting scenario. There is growing interest in developing AI assistants that support meetings, for example by assisting with tasks or summarizing a discussion. The quality of this support depends on a model that understands the conversational dialogue. To evaluate this understanding, we introduce a dual-axis testing framework for diagnosing an AI agent's comprehension of spoken conversations about data. Using this framework, we designed a series of tests to evaluate understanding of a novel corpus of 72 spoken conversational dialogues about data visualizations. We examine diverse pipelines and model architectures (LLMs vs. VLMs) and diverse input formats for visualizations (the chart image, its underlying source code, or a hybrid of both) to assess how these choices affect model performance on our tests. Using our evaluation methods, we found that text-only input modalities achieved the best performance (96%) in understanding discussions of visualizations in online meetings.
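To make the evaluation setup concrete, below is a minimal Python sketch of how comprehension accuracy might be scored across the three visualization input formats named in the abstract (image, source code, hybrid). This is an illustration under assumptions, not the authors' implementation: the names Dialogue, build_prompt, evaluate, and the model callable are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Input formats described in the abstract: the chart image, its underlying
# source code, or a hybrid of both. Identifiers here are illustrative only.
MODALITIES = ("image", "code", "hybrid")

@dataclass
class Dialogue:
    """One spoken conversation about a visualization, plus ground truth."""
    transcript: str     # transcribed meeting dialogue
    chart_image: bytes  # rendered chart (consumed by VLM pipelines)
    chart_code: str     # underlying plotting source (consumed by LLM pipelines)
    expected: str       # gold answer to a comprehension test question

def build_prompt(dialogue: Dialogue, modality: str) -> dict:
    """Assemble the model input for the chosen visualization format."""
    prompt = {"text": dialogue.transcript}
    if modality in ("code", "hybrid"):
        prompt["text"] += "\n\nChart source:\n" + dialogue.chart_code
    if modality in ("image", "hybrid"):
        prompt["image"] = dialogue.chart_image
    return prompt

def evaluate(
    corpus: list[Dialogue],
    model: Callable[[dict], str],  # any LLM/VLM wrapper: prompt -> answer
) -> dict[str, float]:
    """Score comprehension accuracy separately for each input modality."""
    scores = {}
    for modality in MODALITIES:
        correct = sum(
            model(build_prompt(d, modality)).strip() == d.expected
            for d in corpus
        )
        scores[modality] = correct / len(corpus)
    return scores
```

Run over a 72-dialogue corpus, a loop like this would yield one accuracy per modality, which is the kind of comparison behind the paper's finding that text-only inputs performed best (96%).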
Similar Papers
A Multimodal Conversational Agent for Tabular Data Analysis
Artificial Intelligence
Talks to data, answers with charts or words.
VizTA: Enhancing Comprehension of Distributional Visualization with Visual-Lexical Fused Conversational Interface
Human-Computer Interaction
Helps people understand charts by talking and showing.
Analyzing the Sensitivity of Vision Language Models in Visual Question Answering
Computer Vision and Pattern Recognition
Helps AI understand tricky questions the way people do.