Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach

Published: May 20, 2025 | arXiv ID: 2505.14336v2

By: Umberto Cappellazzo, Minsu Kim, Stavros Petridis, and more

Potential Business Impact:

Helps computers understand speech in loud places by also reading lip movements, at a lower computing cost.

Business Areas:
Speech Recognition, Data and Analytics, Software

Audio-Visual Speech Recognition (AVSR) enhances robustness in noisy environments by integrating visual cues. While recent advances integrate Large Language Models (LLMs) into AVSR, their high computational cost hinders deployment in resource-constrained settings. To address this, we propose Llama-SMoP, an efficient Multimodal LLM that employs a Sparse Mixture of Projectors (SMoP) module to scale model capacity without increasing inference costs. By incorporating sparsely-gated mixture-of-experts (MoE) projectors, Llama-SMoP enables the use of smaller LLMs while maintaining strong performance. We explore three SMoP configurations and show that Llama-SMoP DEDR (Disjoint-Experts, Disjoint-Routers), which uses modality-specific routers and experts, achieves superior performance on ASR, VSR, and AVSR tasks. Ablation studies confirm its effectiveness in expert activation, scalability, and noise robustness.
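The DEDR variant pairs each modality with its own router and its own pool of expert projectors. Below is a minimal PyTorch sketch of that idea; the feature dimensions, number of experts, top-k value, and two-layer MLP expert design are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a Sparse Mixture of Projectors (SMoP) in the
# DEDR (Disjoint-Experts, Disjoint-Routers) configuration.
# All sizes and the expert architecture are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoP(nn.Module):
    """Sparsely-gated mixture of MLP projectors for one modality."""

    def __init__(self, in_dim, llm_dim, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(in_dim, num_experts)  # per-modality router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, llm_dim), nn.GELU(),
                          nn.Linear(llm_dim, llm_dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                       # x: (batch, seq, in_dim)
        logits = self.router(x)                 # (batch, seq, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over active experts
        out = torch.zeros(*x.shape[:-1], self.experts[0][-1].out_features,
                          device=x.device, dtype=x.dtype)
        # For clarity each expert is applied densely and masked out where it
        # was not selected; efficient implementations dispatch only the
        # tokens routed to each expert.
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                   # (batch, seq, top_k)
            if mask.any():
                gate = (weights * mask).sum(-1, keepdim=True)
                out = out + gate * expert(x)
        return out


class LlamaSMoPDEDR(nn.Module):
    """Disjoint routers and disjoint expert pools, one SMoP per modality."""

    def __init__(self, audio_dim=1024, video_dim=768, llm_dim=2048):
        super().__init__()
        self.audio_proj = SparseMoP(audio_dim, llm_dim)
        self.video_proj = SparseMoP(video_dim, llm_dim)

    def forward(self, audio_feats, video_feats):
        # Project each modality into the LLM embedding space, then
        # concatenate along time to form the multimodal prefix for the LLM.
        return torch.cat([self.audio_proj(audio_feats),
                          self.video_proj(video_feats)], dim=1)
```

Because each router activates only top_k experts per token, the parameter count grows with num_experts while per-token inference cost stays roughly constant, which is the scaling property the abstract highlights.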

Country of Origin
🇬🇧 United Kingdom

Page Count
5 pages

Category
Electrical Engineering and Systems Science: Audio and Speech Processing