VALLR-Pin: Dual-Decoding Visual Speech Recognition for Mandarin with Pinyin-Guided LLM Refinement
By: Chang Sun, Dongliang Xie, Bo Qin, and more
Visual Speech Recognition aims to transcribe spoken words from silent lip-motion videos. The task is particularly challenging for Mandarin, where visemes are highly ambiguous and homophones are prevalent. We propose VALLR-Pin, a novel two-stage framework that extends the recent VALLR architecture from English to Mandarin. First, a shared video encoder feeds dual decoders that jointly predict Chinese character sequences and their standard Pinyin romanization; this multi-task learning of character and phonetic outputs fosters robust visual-semantic representations. During inference, the text decoder generates multiple candidate transcripts. We construct a prompt by concatenating the Pinyin output with these candidate Chinese sequences and feed it to a large language model to resolve ambiguities and refine the transcription, giving the LLM explicit phonetic context for correcting homophone-induced errors. Finally, we fine-tune the LLM on synthetic noisy examples: we generate imperfect Pinyin-text pairs from intermediate VALLR-Pin checkpoints on the training data, creating instruction-response pairs for error correction. This makes the LLM aware of our model's specific error patterns. In summary, VALLR-Pin combines visual features with phonetic and linguistic context to improve Mandarin lip-reading performance.
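To make the dual-decoding stage concrete, here is a minimal PyTorch sketch. Everything specific in it is an assumption: the abstract does not give the encoder architecture, vocabulary sizes, decoder design, or loss weighting, so the Transformer encoder, the linear heads standing in for the two decoders, the per-frame cross-entropy, and the balancing weight alpha are all illustrative.

    import torch
    import torch.nn as nn

    class VALLRPinSketch(nn.Module):
        """Shared video encoder feeding dual heads: Chinese characters and Pinyin.
        All sizes and modules are illustrative stand-ins, not the paper's design."""

        def __init__(self, feat_dim=512, char_vocab=5000, pinyin_vocab=1500):
            super().__init__()
            # Stand-in for the shared video encoder (the real front end is unspecified).
            layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            # Linear heads standing in for the character and Pinyin decoders.
            self.char_head = nn.Linear(feat_dim, char_vocab)
            self.pinyin_head = nn.Linear(feat_dim, pinyin_vocab)

        def forward(self, lip_feats):              # lip_feats: (B, T, feat_dim)
            h = self.encoder(lip_feats)
            return self.char_head(h), self.pinyin_head(h)

    def joint_loss(char_logits, pinyin_logits, char_tgt, pinyin_tgt, alpha=0.5):
        """Weighted multi-task loss; alpha is a hypothetical balancing weight."""
        ce = nn.CrossEntropyLoss()
        l_char = ce(char_logits.flatten(0, 1), char_tgt.flatten())
        l_pinyin = ce(pinyin_logits.flatten(0, 1), pinyin_tgt.flatten())
        return alpha * l_char + (1 - alpha) * l_pinyin

    model = VALLRPinSketch()
    feats = torch.randn(2, 75, 512)                # two clips, 75 frames each
    char_logits, pinyin_logits = model(feats)

A per-frame cross-entropy is used here only for brevity; a sequence loss such as CTC or autoregressive decoding would be equally plausible, and the abstract does not say which the authors use.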
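The inference-time refinement prompt can be sketched in the same spirit. The exact template is not published; the instruction wording, the function name build_refinement_prompt, and the tone-number Pinyin format below are assumptions.

    def build_refinement_prompt(pinyin_seq, candidates):
        """Assemble an LLM prompt from the Pinyin output and the N-best
        character candidates (template wording is illustrative)."""
        lines = [
            "You are correcting a Mandarin lip-reading transcript.",
            "Pinyin (phonetic) output: " + " ".join(pinyin_seq),
            "Candidate transcripts:",
        ]
        lines += [f"{i + 1}. {c}" for i, c in enumerate(candidates)]
        lines.append("Return the single most plausible Chinese transcript.")
        return "\n".join(lines)

    # A homophone-prone example: the tone-number Pinyin "na3" favors the
    # interrogative 哪里 over the locative 那里.
    prompt = build_refinement_prompt(
        ["ta1", "zai4", "na3", "li3"],
        ["他在那里", "他在哪里"],
    )
    print(prompt)

Passing the Pinyin alongside the candidates is what gives the LLM the phonetic evidence it needs to pick between visually identical but phonetically distinct characters.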
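Finally, the synthetic fine-tuning data described in the abstract might be generated as follows, reusing build_refinement_prompt from the previous sketch. Here checkpoint_decode is a hypothetical callable standing in for an intermediate VALLR-Pin checkpoint's decode step, and the instruction-response schema is illustrative.

    def make_error_correction_pairs(checkpoint_decode, training_set):
        """Build instruction-response pairs from imperfect intermediate decodes.

        checkpoint_decode: hypothetical callable mapping a video clip to
        (pinyin_seq, candidate_transcripts) via an intermediate checkpoint.
        training_set: iterable of (clip, gold_transcript) pairs.
        """
        pairs = []
        for clip, gold_text in training_set:
            pinyin_seq, candidates = checkpoint_decode(clip)
            pairs.append({
                "instruction": build_refinement_prompt(pinyin_seq, candidates),
                "response": gold_text,  # ground truth is the correction target
            })
        return pairs

Because the noisy inputs come from the model's own intermediate checkpoints, the resulting pairs expose the LLM to the error distribution it will actually see at inference time.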
Similar Papers
Phoneme-Level Visual Speech Recognition via Point-Visual Fusion and Language Model Reconstruction
CV and Pattern Recognition
Lets computers "hear" words from lip movements.
RVLF: A Reinforcing Vision-Language Framework for Gloss-Free Sign Language Translation
CV and Pattern Recognition
Translates sign language into words better.
More Than the Final Answer: Improving Visual Extraction and Logical Consistency in Vision-Language Models
CV and Pattern Recognition
Makes AI better at seeing and thinking.