The Speech-LLM Takes It All: A Truly Fully End-to-End Spoken Dialogue State Tracking Approach
By: Nizar El Ghazal, Antoine Caubrière, Valentin Vielzeuf
Potential Business Impact:
Lets computers understand long spoken chats better.
This paper presents a comparative study of context management strategies for end-to-end Spoken Dialogue State Tracking using Speech-LLMs. We systematically evaluate traditional multimodal context (combining text history and spoken current turn), full spoken history, and compressed spoken history approaches. Our experiments on the SpokenWOZ corpus demonstrate that providing the full spoken conversation as input yields the highest performance among models of similar size, significantly surpassing prior methods. Furthermore, we show that attention-pooling-based compression of the spoken history offers a strong trade-off, maintaining competitive accuracy with reduced context size. Detailed analysis confirms that improvements stem from more effective context utilization.
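The attention-pooling compression mentioned in the abstract can be illustrated with a minimal sketch: a learnable query vector attends over the frame embeddings of each past spoken turn, collapsing a variable-length turn into one fixed-size vector, so the history fed to the Speech-LLM grows by one vector per turn rather than by hundreds of frames. The function name, dimensions, and single-query design below are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def attention_pool(frames: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Compress a (T, d) sequence of speech-frame embeddings into one
    d-dim vector via scaled dot-product attention with a single query.
    Hypothetical sketch; the paper's pooling module may differ."""
    scores = frames @ query / np.sqrt(frames.shape[-1])  # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over the T frames
    return weights @ frames           # (d,) attention-weighted sum

rng = np.random.default_rng(0)
# Three past spoken turns of different lengths (5, 12, 3 frames), d = 8.
history = [rng.standard_normal((t, 8)) for t in (5, 12, 3)]
query = rng.standard_normal(8)        # stands in for a learned query
compressed = np.stack([attention_pool(turn, query) for turn in history])
print(compressed.shape)               # one vector per turn: (3, 8)
```

In a trained system the query (or several queries, for a longer summary) would be learned jointly with the Speech-LLM, but the compression ratio already shows up here: 20 frame vectors reduce to 3 turn vectors.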
Similar Papers
Joint Speech and Text Training for LLM-Based End-to-End Spoken Dialogue State Tracking
Computation and Language
Lets computers understand spoken words in new situations.
Spoken Conversational Agents with Large Language Models
Computation and Language
Lets computers understand and talk like people.
Hearing to Translate: The Effectiveness of Speech Modality Integration into LLMs
Computation and Language
New AI better at translating spoken words.