Speech Recognition on TV Series with Video-guided Post-ASR Correction
By: Haoyuan Yang, Yue Zhang, Liqiang Jing, and more
Potential Business Impact:
Makes spoken dialogue in TV shows easier to transcribe accurately.
Automatic Speech Recognition (ASR) has achieved remarkable success with deep learning, driving advancements in conversational artificial intelligence, media transcription, and assistive technologies. However, ASR systems still struggle in complex environments such as TV series, where multiple speakers, overlapping speech, domain-specific terminology, and long-range contextual dependencies pose significant challenges to transcription accuracy. Existing approaches fail to explicitly leverage the rich temporal and contextual information available in the video. To address this limitation, we propose a Video-Guided Post-ASR Correction (VPC) framework that uses a Video-Large Multimodal Model (VLMM) to capture video context and refine ASR outputs. Evaluations on a TV-series benchmark show that our method consistently improves transcription accuracy in complex multimedia environments.
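The correction step described above can be sketched as a simple pipeline: video-derived context (e.g. which characters are on screen, any on-screen text) is folded into a prompt alongside the raw ASR hypothesis, and a VLMM rewrites the transcript. This is a minimal illustration, not the paper's implementation; the function names, prompt format, and the toy stand-in model are all assumptions.

```python
# Hedged sketch of video-guided post-ASR correction.
# The VLMM is any callable mapping a prompt string to corrected text;
# all names here are illustrative, not from the paper.

def build_correction_prompt(asr_hypothesis, video_context):
    """Combine the raw ASR transcript with video-derived context
    (speakers on screen, on-screen text) into a correction prompt."""
    return (
        "Video context:\n"
        f"- Speakers on screen: {', '.join(video_context['speakers'])}\n"
        f"- On-screen text: {video_context.get('on_screen_text', 'none')}\n\n"
        "Raw ASR transcript:\n"
        f"{asr_hypothesis}\n\n"
        "Rewrite the transcript so names and domain terms "
        "match the video context."
    )

def correct_with_vlmm(asr_hypothesis, video_context, vlmm):
    """Refine an ASR hypothesis by prompting a (stubbed) VLMM."""
    prompt = build_correction_prompt(asr_hypothesis, video_context)
    return vlmm(prompt)

def toy_vlmm(prompt):
    """Toy stand-in for a real VLMM: extracts the transcript from the
    prompt and fixes one misrecognized character name."""
    transcript = prompt.split("Raw ASR transcript:\n")[1].split("\n\n")[0]
    return transcript.replace("more arty", "Moriarty")
```

In a real system the toy model would be replaced by a multimodal model that also receives the video frames; the point of the sketch is only that contextual cues from the video constrain how the ASR text is rewritten.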
Similar Papers
Visual-Aware Speech Recognition for Noisy Scenarios
Computation and Language
Helps computers hear speech in noisy places.
Better Pseudo-labeling with Multi-ASR Fusion and Error Correction by SpeechLLM
Audio and Speech Processing
Makes computers understand spoken words better.
CMT-LLM: Contextual Multi-Talker ASR Utilizing Large Language Models
Audio and Speech Processing
Helps computers understand many people talking at once.