Approaching Dialogue State Tracking via Aligning Speech Encoders and LLMs
By: Šimon Sedláček, Bolaji Yusuf, Ján Švec, and more
Potential Business Impact:
Helps computers understand what people say.
In this work, we approach spoken Dialogue State Tracking (DST) by bridging the representation spaces of speech encoders and LLMs via a small connector module, focusing on fully open-sourced and open-data components (WavLM-large, OLMo). We ablate different aspects of such systems, including full vs. LoRA adapter fine-tuning, the effect of agent turns in the dialogue history, and fuzzy-matching-based output post-processing, which greatly improves the performance of our systems on named entities in the dialogue slot values. We conduct our experiments on the SpokenWOZ dataset and additionally use the Speech-Aware MultiWOZ dataset to augment our training data. Ultimately, our best-performing WavLM + connector + OLMo-1B aligned models achieve state-of-the-art results on the SpokenWOZ test set (34.66% JGA), and our system with Gemma-2-9B-instruct surpasses this further, reaching 42.17% JGA on the SpokenWOZ test set.
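The abstract does not give implementation details, but a minimal sketch of the two components it names might look as follows: a small connector that downsamples and projects WavLM-large frame features into the LLM's embedding space, and a fuzzy-matching step that snaps decoded slot values onto a known ontology. The module names, dimensions, stacking factor, and matching criterion below are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only; the paper's connector architecture and
# post-processing may differ from what is shown here.
from difflib import get_close_matches

import torch
import torch.nn as nn


class Connector(nn.Module):
    """Maps speech-encoder frames into the LLM embedding space."""

    def __init__(self, speech_dim=1024, llm_dim=2048, stack=4):
        # speech_dim: WavLM-large hidden size (1024).
        # llm_dim: assumed LLM embedding size (e.g., a 1B-scale decoder).
        # stack: consecutive frames merged to shorten the sequence.
        super().__init__()
        self.stack = stack
        self.proj = nn.Sequential(
            nn.Linear(speech_dim * stack, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, speech_feats):
        # speech_feats: (batch, frames, speech_dim) from the speech encoder.
        b, t, d = speech_feats.shape
        t = t - t % self.stack  # drop leftover frames
        x = speech_feats[:, :t, :].reshape(b, t // self.stack, d * self.stack)
        return self.proj(x)  # (batch, t // stack, llm_dim)


def fuzzy_postprocess(predicted_value, ontology_values, cutoff=0.8):
    """Snap a decoded slot value to the closest known ontology entry.

    Hypothetical helper: the paper's exact matching criterion is not
    specified here, so difflib's ratio-based matching stands in for it.
    """
    matches = get_close_matches(predicted_value, ontology_values, n=1, cutoff=cutoff)
    return matches[0] if matches else predicted_value
```

In a setup like this, the connector output would be concatenated with embedded prompt tokens and fed to the (frozen or LoRA-adapted) LLM, and slot values decoded by the LLM could then be normalized against the ontology with a helper such as `fuzzy_postprocess`.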
Similar Papers
Interpretable and Robust Dialogue State Tracking via Natural Language Summarization with LLMs
Computation and Language
Helps chatbots understand what you're saying better.
Joint Speech and Text Training for LLM-Based End-to-End Spoken Dialogue State Tracking
Computation and Language
Lets computers understand spoken words in new situations.
Factors affecting the in-context learning abilities of LLMs for dialogue state tracking
Computation and Language
Helps computers understand what you're saying in chats.