From Hype to Insight: Rethinking Large Language Model Integration in Visual Speech Recognition
By: Rishabh Jain, Naomi Harte
Potential Business Impact:
Helps computers understand spoken words from lip movements.
Advances in self-supervised encoders have improved Visual Speech Recognition (VSR). Recent approaches that integrate these encoders with LLM decoders improve transcription accuracy; however, it remains unclear whether these gains stem from visual understanding or stronger language modeling. In this work, we systematically evaluate LLM decoders by freezing or selectively updating the visual encoder, scaling decoder size, comparing adaptation strategies and architectures, and varying training data across LRS2, LRS3, and their combination. Evaluation on LRS2, LRS3, and WildVSR shows that scaling and adaptation yield limited improvements, while combining datasets enhances generalization. Semantic analysis reveals that gains arise primarily from lexical rather than semantic processing. Our Llama-2-13B model trained on the combined set achieves 24.7% WER on LRS3 and 47.0% on WildVSR, establishing state-of-the-art performance among models trained without additional supervision. Our findings indicate that LLM decoders refine contextual reasoning rather than visual features, underscoring the need for stronger visual encoders to drive meaningful progress.
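The evaluation described in the abstract hinges on keeping the visual front end fixed while only the language-model side is adapted. Below is a minimal PyTorch sketch of that kind of setup; the class name, the placeholder encoder/decoder modules, and the linear projection adapter are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): freeze a pretrained visual
# encoder and train only a projection layer plus the LLM decoder, so any
# improvement can be attributed to the language-model side.
import torch
import torch.nn as nn


class VSRWithLLMDecoder(nn.Module):
    def __init__(self, visual_encoder: nn.Module, llm_decoder: nn.Module,
                 enc_dim: int, llm_dim: int, freeze_encoder: bool = True):
        super().__init__()
        self.visual_encoder = visual_encoder
        if freeze_encoder:
            # Freeze the visual encoder so its features are not updated.
            for p in self.visual_encoder.parameters():
                p.requires_grad = False
        # Linear adapter mapping visual features into the LLM embedding space.
        self.projection = nn.Linear(enc_dim, llm_dim)
        self.llm_decoder = llm_decoder

    def forward(self, lip_frames: torch.Tensor) -> torch.Tensor:
        # lip_frames: (batch, time, ...) cropped mouth-region video input.
        feats = self.visual_encoder(lip_frames)   # (batch, time, enc_dim)
        prefix = self.projection(feats)           # (batch, time, llm_dim)
        # The LLM consumes the projected visual tokens as a soft prefix and
        # autoregressively predicts the transcript (details model-specific).
        return self.llm_decoder(prefix)
```

Swapping `freeze_encoder` between `True` and `False` mirrors the frozen-versus-selectively-updated encoder comparison the abstract describes, with everything else held constant.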
Similar Papers
Leveraging Large Language Models in Visual Speech Recognition: Model Scaling, Context-Aware Decoding, and Iterative Polishing
CV and Pattern Recognition
Makes computers understand talking by watching lips.
Omni-AVSR: Towards Unified Multimodal Speech Recognition with Large Language Models
Audio and Speech Processing
Lets one computer understand talking from sound and sight.
Vision-Enhanced Large Language Models for High-Resolution Image Synthesis and Multimodal Data Interpretation
CV and Pattern Recognition
Makes computers create clearer pictures from words.