Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach
By: Umberto Cappellazzo, Minsu Kim, Stavros Petridis, and more
Potential Business Impact:
Helps computers understand speech even in loud places.
Audio-Visual Speech Recognition (AVSR) enhances robustness in noisy environments by integrating visual cues. While recent advances integrate Large Language Models (LLMs) into AVSR, their high computational cost hinders deployment in resource-constrained settings. To address this, we propose Llama-SMoP, an efficient Multimodal LLM that employs a Sparse Mixture of Projectors (SMoP) module to scale model capacity without increasing inference costs. By incorporating sparsely-gated mixture-of-experts (MoE) projectors, Llama-SMoP enables the use of smaller LLMs while maintaining strong performance. We explore three SMoP configurations and show that Llama-SMoP DEDR (Disjoint-Experts, Disjoint-Routers), which uses modality-specific routers and experts, achieves superior performance on ASR, VSR, and AVSR tasks. Ablation studies confirm its effectiveness in expert activation, scalability, and noise robustness.
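To make the DEDR (Disjoint-Experts, Disjoint-Routers) idea concrete, below is a minimal sketch of a sparsely-gated mixture-of-projectors layer in which each modality has its own router and its own pool of projector experts. The dimensions, expert counts, top-k value, and class names are illustrative assumptions, not values or code from the paper.

```python
# Sketch of a sparse mixture-of-projectors (SMoP) in the DEDR configuration:
# disjoint routers and disjoint experts per modality. All sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoPProjector(nn.Module):
    """One modality's sparse mixture of linear projectors with a top-k router."""

    def __init__(self, in_dim: int, llm_dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(in_dim, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Linear(in_dim, llm_dim) for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_dim) encoder features for this modality
        gate_logits = self.router(x)                           # (B, T, E)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                   # renormalize over top-k

        out = torch.zeros(*x.shape[:-1], self.experts[0].out_features, device=x.device)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out                                             # (B, T, llm_dim)


class SMoPFrontEnd(nn.Module):
    """DEDR front end: one SMoP projector per modality, feeding a shared LLM."""

    def __init__(self, audio_dim: int = 1024, video_dim: int = 768, llm_dim: int = 2048):
        super().__init__()
        self.audio_proj = SparseMoPProjector(audio_dim, llm_dim)
        self.video_proj = SparseMoPProjector(video_dim, llm_dim)

    def forward(self, audio_feats: torch.Tensor, video_feats: torch.Tensor) -> torch.Tensor:
        # Project each modality with its own router and experts, then concatenate
        # along the sequence dimension before passing tokens to the LLM.
        a = self.audio_proj(audio_feats)
        v = self.video_proj(video_feats)
        return torch.cat([a, v], dim=1)                        # (B, T_a + T_v, llm_dim)
```

Because only the top-k experts fire per token, capacity grows with the number of experts while per-token inference cost stays close to that of a single projector, which is the efficiency argument the abstract makes.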
Similar Papers
Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs
CV and Pattern Recognition
Lets computers understand speech better, even with noise.
MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with Minimal Multimodal Speech Tokens
CV and Pattern Recognition
Lets computers understand spoken words better, even with noise.
Dynamic Multi-Expert Projectors with Stabilized Routing for Multilingual Speech Recognition
Computation and Language
Helps computers understand speech in many languages.