Phoneme-Level Visual Speech Recognition via Point-Visual Fusion and Language Model Reconstruction
By: Matthew Kit Khinn Teng, Haibo Zhang, Takeshi Saitoh
Potential Business Impact:
Lets computers "hear" words from lip movements.
Visual Automatic Speech Recognition (V-ASR) is a challenging task that involves interpreting spoken language solely from visual information, such as lip movements and facial expressions. The task is notably difficult due to the absence of auditory cues and the visual ambiguity of phonemes that share similar visemes, i.e., distinct sounds that appear nearly identical in lip motion. Existing methods often aim to predict words or characters directly from visual cues, but they commonly suffer from high error rates due to viseme ambiguity and require large amounts of pre-training data. To address these challenges, we propose a novel phoneme-based two-stage framework that fuses visual and landmark motion features, followed by a large language model (LLM) for word reconstruction. Stage 1 performs V-ASR and outputs predicted phonemes, thereby reducing training complexity, while the facial landmark features account for speaker-specific facial characteristics. Stage 2 comprises an encoder-decoder LLM, NLLB, that reconstructs the predicted phonemes into words. Besides leveraging a large visual dataset for deep fine-tuning, our PV-ASR method demonstrates superior performance, achieving 17.4% WER on the LRS2 dataset and 21.0% WER on the LRS3 dataset.
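The sketch below illustrates the two-stage pipeline described in the abstract: Stage 1 fuses per-frame visual features with facial-landmark (point) features to predict phonemes, and Stage 2 hands the decoded phoneme sequence to an NLLB encoder-decoder for word reconstruction. All module names, feature dimensions, and the use of the public facebook/nllb-200-distilled-600M checkpoint are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch of the two-stage PV-ASR pipeline (not the authors' code).
# Stage 1: point-visual fusion -> per-frame phoneme logits (CTC-style head assumed).
# Stage 2: phoneme string -> words via an NLLB encoder-decoder.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


class PointVisualPhonemePredictor(nn.Module):
    """Stage 1: fuse lip-ROI visual features with landmark motion features."""

    def __init__(self, visual_dim=512, landmark_dim=136, hidden=256, num_phonemes=40):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden)      # frame-level visual features
        self.landmark_proj = nn.Linear(landmark_dim, hidden)  # e.g. 68 flattened (x, y) landmarks
        self.temporal = nn.GRU(2 * hidden, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_phonemes + 1)  # +1 for a CTC blank

    def forward(self, visual_feats, landmark_feats):
        # visual_feats: (B, T, visual_dim); landmark_feats: (B, T, landmark_dim)
        fused = torch.cat([self.visual_proj(visual_feats),
                           self.landmark_proj(landmark_feats)], dim=-1)
        out, _ = self.temporal(fused)
        return self.classifier(out)  # (B, T, num_phonemes + 1) phoneme logits


def reconstruct_words(phoneme_string, model_name="facebook/nllb-200-distilled-600M"):
    """Stage 2: map a space-separated phoneme sequence to words with NLLB."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    nllb = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tokenizer(phoneme_string, return_tensors="pt")
    generated = nllb.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(generated[0], skip_special_tokens=True)


if __name__ == "__main__":
    stage1 = PointVisualPhonemePredictor()
    dummy_visual = torch.randn(1, 75, 512)     # 75 frames of visual features
    dummy_landmarks = torch.randn(1, 75, 136)  # 75 frames of flattened landmarks
    phoneme_logits = stage1(dummy_visual, dummy_landmarks)
    print(phoneme_logits.shape)  # torch.Size([1, 75, 41])
    # In practice, CTC-decoded phonemes (e.g. "HH AH L OW") would be passed to
    # reconstruct_words(); the off-the-shelf NLLB checkpoint only approximates
    # the paper's fine-tuned phoneme-to-word model.
```

The split mirrors the paper's motivation: predicting a small phoneme inventory keeps Stage 1 training simple despite viseme ambiguity, and the language model in Stage 2 resolves that ambiguity when mapping phonemes back to words.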
Similar Papers
Visual-Aware Speech Recognition for Noisy Scenarios
Computation and Language
Helps computers hear speech in noisy places.
Designing Practical Models for Isolated Word Visual Speech Recognition
CV and Pattern Recognition
Lets computers understand talking from lip movements.
Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better
CV and Pattern Recognition
Teaches computers to understand pictures better.