Exploring the Potential of Large Multimodal Models as Effective Alternatives for Pronunciation Assessment
By: Ke Wang, Lei He, Kun Liu, and more
Potential Business Impact:
Helps computers judge how well you speak.
Large Multimodal Models (LMMs) have demonstrated exceptional performance across a wide range of domains. This paper explores their potential in pronunciation assessment tasks, with a particular focus on evaluating the capabilities of the Generative Pre-trained Transformer (GPT) model, specifically GPT-4o. Our study investigates its ability to process speech and audio for pronunciation assessment across multiple levels of granularity and dimensions, with an emphasis on feedback generation and scoring. For our experiments, we use the publicly available Speechocean762 dataset. The evaluation focuses on two key aspects: multi-level scoring and the practicality of the generated feedback. Scoring results are compared against the manual scores provided in the Speechocean762 dataset, while feedback quality is assessed using Large Language Models (LLMs). The findings highlight the effectiveness of integrating LMMs with traditional methods for pronunciation assessment, offering insights into the model's strengths and identifying areas for further improvement.
Similar Papers
Fine-Tuning Large Multimodal Models for Automatic Pronunciation Assessment
Computation and Language
Helps computers judge how well you speak.
Multimodal Large Language Models for Image, Text, and Speech Data Augmentation: A Survey
CV and Pattern Recognition
Makes computer learning better with more varied examples.
The Effectiveness of Large Language Models in Transforming Unstructured Text to Standardized Formats
Artificial Intelligence
Turns messy text into organized lists.