Natural Language-based Assessment of L2 Oral Proficiency using LLMs
By: Stefano Bannò, Rao Ma, Mengjie Qian and more
Potential Business Impact:
Lets computers grade language tests like people.
Natural language-based assessment (NLA) is an approach to second language assessment that uses instructions - expressed in the form of can-do descriptors - originally intended for human examiners; the aim is to determine whether large language models (LLMs) can interpret and apply them in ways comparable to human assessment. In this work, we explore the use of such descriptors with an open-source LLM, Qwen 2.5 72B, to assess responses from the publicly available S&I Corpus in a zero-shot setting. Our results show that this approach - relying solely on textual information - achieves competitive performance: while it does not outperform state-of-the-art speech LLMs fine-tuned for the task, it surpasses a BERT-based model trained specifically for this purpose. NLA proves particularly effective in mismatched task settings, generalises to other data types and languages, and offers greater interpretability, as it is grounded in clearly explainable, widely applicable language descriptors.
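To make the zero-shot NLA setup concrete, here is a minimal sketch of how a transcribed L2 response might be scored against can-do descriptors by an instruction-tuned open-source LLM. The descriptor wording, prompt, endpoint, and helper function below are illustrative assumptions, not the authors' actual prompts or scoring scheme; it assumes an OpenAI-compatible server (e.g. vLLM) hosting Qwen 2.5 72B Instruct.

```python
# Sketch of zero-shot natural language-based assessment (NLA):
# a text transcript of an L2 spoken response is matched against
# CEFR-style can-do descriptors by an instruction-tuned LLM.
# Prompt wording and descriptors are illustrative, not the paper's.

from openai import OpenAI

# Assumes a local OpenAI-compatible server (e.g. vLLM) serving the model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Paraphrased, illustrative can-do descriptors for spoken fluency.
CAN_DO_DESCRIPTORS = """\
B1: Can keep going comprehensibly, though pausing for grammatical and
    lexical planning and repair is very evident.
B2: Can produce stretches of language with a fairly even tempo and few
    noticeably long pauses.
C1: Can express themselves fluently and spontaneously, almost effortlessly.
"""

def assess_response(transcript: str) -> str:
    """Ask the LLM which descriptor level best matches the transcript."""
    prompt = (
        "You are an examiner of second language oral proficiency.\n"
        "Using only the descriptors below, decide which level best matches "
        "the candidate's response, then answer with the level label only.\n\n"
        f"Descriptors:\n{CAN_DO_DESCRIPTORS}\n"
        f"Candidate response (transcript):\n{transcript}\n\nLevel:"
    )
    completion = client.chat.completions.create(
        model="Qwen/Qwen2.5-72B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic, zero-shot grading
        max_tokens=5,
    )
    return completion.choices[0].message.content.strip()

if __name__ == "__main__":
    print(assess_response("well I think that erm travelling is em very good because ..."))
```

Because the model sees only text, this kind of pipeline depends on an upstream ASR transcript and on how the descriptors are phrased, which is also what makes its decisions easier to explain in terms of the descriptors themselves.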
Similar Papers
Assessment of L2 Oral Proficiency using Speech Large Language Models
Computation and Language
Helps computers grade how well people speak English.
Session-Level Spoken Language Assessment with a Multimodal Foundation Model via Multi-Target Learning
Computation and Language
Tests how well people speak a new language.
Integration of LLM Quality Assurance into an NLG System
Computation and Language
Fixes writing mistakes in computer-made stories.