Do Slides Help? Multi-modal Context for Automatic Transcription of Conference Talks
By: Supriti Sinhamahapatra, Jan Niehues
Potential Business Impact:
Helps computers understand talks with slides better.
State-of-the-art (SOTA) Automatic Speech Recognition (ASR) systems primarily rely on acoustic information while disregarding additional multi-modal context. However, visual information is essential for disambiguation and adaptation. While most work focuses on speaker images to handle noisy conditions, this work focuses on integrating presentation slides for the use case of scientific presentations. In a first step, we create a benchmark for multi-modal presentations, including an automatic analysis of how domain-specific terminology is transcribed. Next, we explore methods for augmenting speech models with multi-modal information. We mitigate the lack of datasets with accompanying slides through a suitable data augmentation approach. Finally, we train a model on the augmented dataset, resulting in a relative reduction in word error rate of approximately 34% across all words and 35% for domain-specific terms compared to the baseline model.
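As a rough illustration of the kind of evaluation the abstract reports (an overall word error rate plus an error rate restricted to domain-specific terms), here is a minimal, self-contained sketch. It is not the authors' benchmark code; the transcripts, the term list, and the helper names are hypothetical examples chosen for illustration.

```python
# Minimal sketch (not the authors' code): overall WER plus an error rate
# restricted to domain-specific terms, the two quantities the abstract reports.
# All transcripts and the term list below are hypothetical.
from collections import Counter


def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over whitespace tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


def term_error_rate(reference: str, hypothesis: str, terms: set[str]) -> float:
    """Fraction of domain-term occurrences in the reference that are
    missing from the hypothesis (a simple bag-of-words proxy)."""
    ref_counts = Counter(w for w in reference.split() if w in terms)
    hyp_counts = Counter(w for w in hypothesis.split() if w in terms)
    total = sum(ref_counts.values())
    missed = sum(max(c - hyp_counts[t], 0) for t, c in ref_counts.items())
    return missed / total if total else 0.0


if __name__ == "__main__":
    # Hypothetical example: a slide-aware model vs. an audio-only baseline.
    reference = "we fine tune the wav2vec encoder on augmented slides"
    baseline = "we find tune the wave to vac encoder on augmented slides"
    slide_aware = "we fine tune the wav2vec encoder on augmented slides"
    terms = {"wav2vec", "encoder"}

    for name, hyp in [("baseline", baseline), ("slide-aware", slide_aware)]:
        print(name,
              "WER:", round(wer(reference, hyp), 3),
              "term error:", round(term_error_rate(reference, hyp, terms), 3))
```

A relative WER reduction such as the reported ~34% would then be computed as (baseline WER − slide-aware WER) / baseline WER, and analogously for the domain-term error rate.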
Similar Papers
MLLM-based Speech Recognition: When and How is Multimodality Beneficial?
Sound
Helps computers hear better in noisy places.
SlideItRight: Using AI to Find Relevant Slides and Provide Feedback for Open-Ended Questions
Human-Computer Interaction
AI gives students better feedback with pictures.
SlideBot: A Multi-Agent Framework for Generating Informative, Reliable, Multi-Modal Presentations
Artificial Intelligence
Makes smart computer programs create better school slides.