Automatic Proficiency Assessment in L2 English Learners
By: Armita Mohammadi, Alessandro Lameiras Koerich, Laureano Moro-Velazquez, and more
Potential Business Impact:
Lets computers grade English speaking tests.
Second language (L2) English proficiency is usually evaluated perceptually by English teachers or expert raters, with inherent intra- and inter-rater variability. This paper explores deep learning techniques for comprehensive L2 proficiency assessment, addressing both the speech signal and its corresponding transcription. We analyze spoken proficiency classification using diverse architectures, including a 2D CNN, a frequency-based CNN, ResNet, and a pretrained wav2vec 2.0 model. Additionally, we examine text-based proficiency assessment by fine-tuning a BERT language model within resource constraints. Finally, we tackle the complex task of spontaneous dialogue assessment, managing long-form audio and speaker interactions through separate applications of wav2vec 2.0 and BERT models. Results from experiments on the EFCamDat and ANGLISH datasets and a private dataset highlight the potential of deep learning, especially the pretrained wav2vec 2.0 model, for robust automated L2 proficiency evaluation.
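To make the speech-based classification setup concrete, here is a minimal sketch of a 2D CNN that maps a log-mel spectrogram to proficiency-class logits. This is not the paper's architecture: the class count (6, CEFR-style A1 to C2), input shape, and layer sizes are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ProficiencyCNN(nn.Module):
    """Toy 2D CNN over spectrograms; hypothetical, not the paper's model."""

    def __init__(self, n_classes: int = 6):  # assumed CEFR-style classes A1-C2
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            # Pool to (batch, 32, 1, 1) so clips of any duration are handled.
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = ProficiencyCNN()
# Batch of 4 clips, 1 channel, 64 assumed mel bins, 200 time frames.
spec = torch.randn(4, 1, 64, 200)
logits = model(spec)
print(logits.shape)  # torch.Size([4, 6])
```

The adaptive pooling step is one common way to handle variable-length utterances; the wav2vec 2.0 variant in the paper would instead feed raw waveforms into the pretrained encoder before classification.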
Similar Papers
The NTNU System at the S&I Challenge 2025 SLA Open Track
Computation and Language
Tests speaking skills better by combining sound and words.
Exploiting the English Vocabulary Profile for L2 word-level vocabulary assessment with LLMs
Computation and Language
Tests how well people use words in sentences.
Classifying German Language Proficiency Levels Using Large Language Models
Computation and Language
Helps teachers know how well students read German.