Spoken Conversational Agents with Large Language Models
By: Chao-Han Huck Yang, Andreas Stolcke, Larry Heck
Potential Business Impact:
Lets computers understand and talk like people.
Spoken conversational agents are converging toward voice-native LLMs. This tutorial distills the path from cascaded ASR/NLU pipelines to end-to-end, retrieval- and vision-grounded systems. We frame the adaptation of text LLMs to audio, cross-modal alignment, and joint speech-text training; review datasets, metrics, and robustness across accents; and compare design choices (cascaded vs. E2E, post-ASR correction, streaming). We link industrial assistants to current open-domain and task-oriented agents, highlight reproducible baselines, and outline open problems in privacy, safety, and evaluation. Attendees leave with practical recipes and a clear systems-level roadmap.
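The cascaded-vs.-end-to-end contrast in the abstract can be made concrete with a minimal sketch. The snippet below is illustrative only: all component names (AsrModel, TextLlm, TtsModel, E2eSpeechLlm) are hypothetical interfaces standing in for whatever models a system actually uses, not a specific library's API.

```python
# Sketch contrasting the two designs discussed in the tutorial: a cascaded
# ASR -> LLM -> TTS pipeline versus a single end-to-end (E2E) voice-native
# model. All class names here are hypothetical stand-ins.
from typing import Protocol


class AsrModel(Protocol):
    def transcribe(self, audio: bytes) -> str: ...


class TextLlm(Protocol):
    def generate(self, prompt: str) -> str: ...


class TtsModel(Protocol):
    def synthesize(self, text: str) -> bytes: ...


class E2eSpeechLlm(Protocol):
    def respond(self, audio: bytes) -> bytes: ...


def cascaded_turn(audio: bytes, asr: AsrModel, llm: TextLlm, tts: TtsModel) -> bytes:
    """One dialogue turn in the cascaded design.

    ASR errors propagate into the LLM prompt, which is what motivates the
    post-ASR correction step mentioned in the abstract.
    """
    transcript = asr.transcribe(audio)
    # Post-ASR correction: ask the LLM to repair likely mis-recognitions
    # before answering, rather than trusting the transcript verbatim.
    prompt = (
        "The following is a possibly noisy ASR transcript. "
        f"Correct obvious recognition errors, then reply.\nUser: {transcript}\nAssistant:"
    )
    reply_text = llm.generate(prompt)
    return tts.synthesize(reply_text)


def e2e_turn(audio: bytes, model: E2eSpeechLlm) -> bytes:
    """One dialogue turn in the end-to-end design: audio in, audio out,
    with no hard transcript bottleneck between components."""
    return model.respond(audio)
```

The trade-off the tutorial compares follows directly from this structure: the cascaded path exposes an inspectable transcript (easier debugging, retrieval, and safety filtering) at the cost of error propagation, while the E2E path avoids the transcript bottleneck but complicates evaluation and grounding.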
Similar Papers
A Multimodal Conversational Agent for Tabular Data Analysis
Artificial Intelligence
Talks to data, answers with charts or words.
From Language to Action: A Review of Large Language Models as Autonomous Agents and Tool Users
Computation and Language
AI learns to think, plan, and improve itself.
DiscussLLM: Teaching Large Language Models When to Speak
Computation and Language
AI learns to talk when it has something useful to say.