Assessment of L2 Oral Proficiency using Speech Large Language Models
By: Rao Ma, Mengjie Qian, Siyuan Tang, and more
Potential Business Impact:
Helps computers grade how well people speak English.
The growing population of L2 English speakers has increased the demand for automatic graders for spoken language assessment (SLA). Historically, statistical models, text encoders, and self-supervised speech models have been utilised for this task. However, cascaded systems suffer from information loss, while end-to-end (E2E) graders have their own limitations. With the recent advancements in multi-modal large language models (LLMs), we aim to explore their potential as L2 oral proficiency graders and overcome these issues. In this work, we compare various training strategies using regression and classification targets. Our results show that speech LLMs outperform all previous competitive baselines, achieving superior performance on two datasets. Furthermore, the trained grader demonstrates strong generalisation capabilities in cross-part and cross-task evaluations, facilitated by the audio understanding knowledge acquired during LLM pre-training.
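To make the comparison concrete, below is a minimal PyTorch sketch, not the authors' implementation, of the two training targets the abstract contrasts: regression on a continuous proficiency score versus classification over discrete score bands, each predicted from pooled speech-LLM features. All names and dimensions here (GraderHead, HIDDEN_DIM, NUM_BANDS, the stand-in feature tensor) are illustrative assumptions.

```python
# Sketch of the two grading targets compared in the paper: regression on a
# continuous score vs. classification over discrete proficiency bands.
# The speech LLM backbone is abstracted away as a source of pooled features;
# HIDDEN_DIM and NUM_BANDS are assumed values, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN_DIM = 4096   # assumed hidden size of the speech LLM's final layer
NUM_BANDS = 6       # assumed number of discrete proficiency bands

class GraderHead(nn.Module):
    def __init__(self, hidden_dim: int, num_bands: int):
        super().__init__()
        self.regressor = nn.Linear(hidden_dim, 1)           # continuous score
        self.classifier = nn.Linear(hidden_dim, num_bands)  # score-band logits

    def forward(self, llm_features: torch.Tensor):
        # llm_features: (batch, hidden_dim), e.g. mean-pooled LLM hidden states
        score = self.regressor(llm_features).squeeze(-1)
        band_logits = self.classifier(llm_features)
        return score, band_logits

# One training step under each target, on dummy data:
head = GraderHead(HIDDEN_DIM, NUM_BANDS)
features = torch.randn(8, HIDDEN_DIM)        # stand-in for pooled speech-LLM states
scores = torch.rand(8) * 5.0                 # continuous reference scores
bands = torch.randint(0, NUM_BANDS, (8,))    # discretised reference bands

pred_score, band_logits = head(features)
regression_loss = F.mse_loss(pred_score, scores)          # regression target
classification_loss = F.cross_entropy(band_logits, bands)  # classification target
```

In practice one of these losses (or a weighted combination) would be backpropagated through the grading head, and optionally into the speech LLM itself, depending on the fine-tuning strategy.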
Similar Papers
Automatic Proficiency Assessment in L2 English Learners
Computation and Language
Lets computers grade English speaking tests.
Evaluating Self-Supervised Speech Models via Text-Based LLMs
Sound
Lets computers check how well other computers learned.
Session-Level Spoken Language Assessment with a Multimodal Foundation Model via Multi-Target Learning
Computation and Language
Tests how well people speak a new language.