Visual-Aware Speech Recognition for Noisy Scenarios

Published: April 9, 2025 | arXiv ID: 2504.07229v1

By: Lakshmipathi Balaji, Karan Singla

Potential Business Impact:

Helps computers transcribe speech in noisy places by using visual cues from the scene.

Business Areas:
Speech Recognition Data and Analytics, Software

Humans have the ability to utilize visual cues, such as lip movements and visual scenes, to enhance auditory perception, particularly in noisy environments. However, current Automatic Speech Recognition (ASR) and Audio-Visual Speech Recognition (AVSR) models often struggle in noisy scenarios. To address this, we propose a model that improves transcription by correlating noise sources with visual cues. Unlike prior works that rely on lip motion and require the speaker to be visible, we exploit broader visual information from the environment. This allows our model to naturally filter speech from noise and improve transcription, much as humans do in noisy scenarios. Our method re-purposes pretrained speech and visual encoders, linking them with multi-headed attention. This approach enables the transcription of speech and the prediction of noise labels for video inputs. We introduce a scalable pipeline for developing audio-visual datasets in which visual cues correlate with the noise in the audio. We show significant improvements over existing audio-only models in noisy scenarios. The results also highlight that visual cues play a vital role in improving transcription accuracy.
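The abstract's core mechanism (linking pretrained speech and visual encoders with multi-headed attention, then decoding both a transcript and noise labels) can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the authors' implementation: the module name, feature dimensions, projection layers, and the two output heads are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): speech features attend to
# visual scene features via multi-headed attention; the fused features
# feed an ASR head and a noise-label head. Dimensions are assumed.
import torch
import torch.nn as nn

class AudioVisualFusionASR(nn.Module):
    def __init__(self, d_model=512, n_heads=8, vocab_size=1000, n_noise_labels=50):
        super().__init__()
        # Stand-ins for frozen pretrained encoders: plain projections
        # from assumed feature sizes into a shared d_model space.
        self.speech_proj = nn.Linear(80, d_model)    # e.g., 80 log-mel bins
        self.visual_proj = nn.Linear(768, d_model)   # e.g., 768-dim frame embeddings
        # Multi-headed attention lets each speech frame gather visual
        # context that hints at the environmental noise source.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.asr_head = nn.Linear(d_model, vocab_size)        # per-frame token logits
        self.noise_head = nn.Linear(d_model, n_noise_labels)  # clip-level noise logits

    def forward(self, mel_frames, visual_feats):
        # mel_frames: (B, T_audio, 80); visual_feats: (B, T_video, 768)
        s = self.speech_proj(mel_frames)
        v = self.visual_proj(visual_feats)
        # Query = speech frames, Key/Value = visual scene features.
        fused, _ = self.cross_attn(query=s, key=v, value=v)
        fused = fused + s  # residual keeps the original speech signal
        token_logits = self.asr_head(fused)            # (B, T_audio, vocab_size)
        noise_logits = self.noise_head(fused.mean(1))  # pooled over time
        return token_logits, noise_logits

# Example usage with random tensors:
model = AudioVisualFusionASR()
tokens, noise = model(torch.randn(2, 200, 80), torch.randn(2, 16, 768))
print(tokens.shape, noise.shape)  # (2, 200, 1000) and (2, 50)
```

Using speech frames as queries over visual keys and values is one plausible reading of "correlating noise sources with visual cues"; the paper's actual attention layout, encoders, and training losses may differ.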

Page Count
7 pages

Category
Computer Science:
Computation and Language