Improving Named Entity Transcription with Contextual LLM-based Revision
By: Viet Anh Trinh, Xinlu He, Jacob Whitehill
Potential Business Impact:
Fixes speech recognition errors for important names.
With recent advances in modeling and the increasing amount of supervised training data, automatic speech recognition (ASR) systems have achieved remarkable performance on general speech. However, the word error rate (WER) of state-of-the-art ASR systems remains high for named entities. Since named entities are often the most critical keywords, misrecognizing them can degrade all downstream applications, especially when the ASR system serves as the front end of a complex system. In this paper, we introduce a large language model (LLM) revision mechanism that corrects misrecognized named entities in ASR predictions by leveraging the LLM's reasoning ability together with local context (e.g., lecture notes) containing a set of correct named entities. We also introduce the NER-MIT-OpenCourseWare dataset, containing 45 hours of data from MIT courses for development and testing. On this dataset, our proposed technique achieves up to 30% relative WER reduction for named entities.
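To make the revision mechanism concrete, below is a minimal Python sketch of how such a pipeline might look: the ASR hypothesis and a list of correct named entities drawn from local context are packed into a prompt, and the LLM is asked to swap in the right entities. The function names, prompt wording, example entities, and stub model are illustrative assumptions, not the paper's actual prompt or code.

```python
"""Sketch of contextual LLM-based revision of named entities in ASR output."""

from typing import Callable, List


def build_revision_prompt(hypothesis: str, context_entities: List[str]) -> str:
    """Assemble a prompt asking the LLM to correct misrecognized named
    entities, constrained to the entity list from local context
    (e.g., lecture notes). The wording here is a hypothetical example."""
    entity_list = "\n".join(f"- {entity}" for entity in context_entities)
    return (
        "The transcript below was produced by a speech recognizer and may "
        "contain misrecognized named entities.\n\n"
        f"Transcript: {hypothesis}\n\n"
        "Named entities known to appear in this lecture:\n"
        f"{entity_list}\n\n"
        "Rewrite the transcript, replacing any misrecognized named entity "
        "with the closest match from the list. Leave all other words "
        "unchanged, and return only the revised transcript."
    )


def revise_named_entities(
    hypothesis: str,
    context_entities: List[str],
    llm: Callable[[str], str],
) -> str:
    """Revise one ASR hypothesis; `llm` maps a prompt string to a completion."""
    return llm(build_revision_prompt(hypothesis, context_entities)).strip()


if __name__ == "__main__":
    # Stub LLM so the sketch runs without network access; a real pipeline
    # would call an actual model here instead.
    def stub_llm(prompt: str) -> str:
        return "Today we will derive the Viterbi algorithm."

    asr_output = "Today we will derive the Vitter B algorithm."
    lecture_entities = ["Viterbi", "Baum-Welch", "Markov"]  # from lecture notes
    print(revise_named_entities(asr_output, lecture_entities, stub_llm))
```

Placing the context-derived entity list directly in the prompt lets the LLM combine its reasoning ability with the local context, which is the core idea described in the abstract.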
Similar Papers
- Customizing Speech Recognition Model with Large Language Model Feedback (Computation and Language): Helps computers understand rare words in speech.
- CMT-LLM: Contextual Multi-Talker ASR Utilizing Large Language Models (Audio and Speech Processing): Helps computers understand many people talking at once.
- SpeechLLM: Unified Speech and Language Model for Enhanced Multi-Task Understanding in Low Resource Settings (Computation and Language): Lets computers understand spoken words for tasks.