Fine-Tuning Large Multimodal Models for Automatic Pronunciation Assessment
By: Ke Wang, Wenning Wei, Yan Deng, and more
Potential Business Impact:
Helps computers judge how well you speak.
Automatic Pronunciation Assessment (APA) is critical for Computer-Assisted Language Learning (CALL), requiring evaluation across multiple granularities and aspects. Large Multimodal Models (LMMs) present new opportunities for APA, but their effectiveness in fine-grained assessment remains uncertain. This work investigates fine-tuning LMMs for APA using the Speechocean762 dataset and a private corpus. Fine-tuning significantly outperforms zero-shot settings and achieves competitive results on single-granularity tasks compared to public and commercial systems. The model performs well at the word and sentence levels, while phoneme-level assessment remains challenging. We also observe that the Pearson correlation coefficient (PCC) reaches 0.9 while the Spearman rank correlation coefficient (SCC) remains around 0.6, suggesting that PCC can overstate agreement and that SCC better reflects ordinal consistency. These findings highlight both the promise and limitations of LMMs for APA and point to future work on fine-grained modeling and rank-aware evaluation.
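To make the PCC/SCC gap concrete, here is a minimal sketch (not from the paper) of how the two metrics can diverge on pronunciation scores. The data is synthetic and purely illustrative, chosen so that a few extreme scores anchor the linear fit while the rank order within the dense mid band is poorly predicted; it assumes scipy's pearsonr and spearmanr.

```python
# Minimal sketch: PCC vs. SCC on synthetic pronunciation scores.
# Not from the paper; the numbers are invented to illustrate divergence.
from scipy.stats import pearsonr, spearmanr

# Hypothetical human reference scores and model predictions (0-10 scale).
# One extreme pair (10.0, 10.0) dominates the linear fit, while the
# ordering inside the 2.0-2.6 band is shuffled.
reference  = [2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 10.0]
prediction = [2.4, 2.0, 2.5, 2.1, 2.6, 2.2, 2.3, 10.0]

pcc, _ = pearsonr(reference, prediction)   # linear agreement
scc, _ = spearmanr(reference, prediction)  # rank (ordinal) agreement

print(f"PCC = {pcc:.2f}")  # close to 1.0: the outlier anchors the line
print(f"SCC = {scc:.2f}")  # far lower: mid-band ranks are misordered
```

A model can therefore look strong under PCC while still failing to order learners correctly, which is exactly what rank-aware evaluation is meant to expose.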
Similar Papers
English Pronunciation Evaluation without Complex Joint Training: LoRA Fine-tuned Speech Multimodal LLM
Computation and Language
Helps computers judge and fix speaking mistakes.
Exploring the Potential of Large Multimodal Models as Effective Alternatives for Pronunciation Assessment
Sound
Helps computers judge how well you speak.
Multi-task Pretraining for Enhancing Interpretable L2 Pronunciation Assessment
Computation and Language
Helps people speak new languages better.